Saturday, November 30, 2019

Spirituality as a factor of well lived life

Introduction

Human beings constantly assert that there is only one life to live. This compels them to make the most of it, as life is considered short. The cornerstone of a good life is happiness, which the dictionary defines as the expression of intense joy and contentment, and which most people classify as an emotion, a notion with which religious experts sharply differ. They instead classify happiness as the sum of all the factors that constitute a good life.

Indicators of a good life

In determining the factors that constitute a well lived life, different indicators are used, and these vary from individual to individual. While some may consider happiness the leading indicator, others consider spirituality. In deciding whether a person leads a good life or not, Abraham Maslow, in his 1943 paper A Theory of Human Motivation (Harriman, 1946), suggested a hierarchical classification of needs. His research was based solely on the assessment of mentally healthy people who were achievers of their generation. These 'specimens' were chosen because they were role models and seemed to be the most content with their ways of life. The most basic needs, at the bottom of the pyramid, were physiological in nature, fundamental to the survival of the human body. These were followed by safety, love, esteem and self-actualization. The epitome of this classification was the desire of a person to be something more than he already is. Within this level of needs lies the desire to be spiritual.

Understanding spirituality

Spirituality is defined as immaterial reality, a notion that allows a human being to understand the essence of his existence. The practices of prayer and meditation are the ways in which people connect to the spiritual world and grow their inner self. They are thus more contented with their own lives, and the measure of this contentment is beyond that encompassed in mere happiness. In summary, spirituality is a level higher than normal happiness, in the broadest meaning of the word. People who are spiritual are at peace and co-exist harmoniously with fellow humans, nature, the entire universe and the divine realm. They unequivocally believe in immateriality, and their needs transcend those Maslow described in his hierarchy. Spirituality has largely been associated with religious experience; however, with the changing patterns and shift to secularism in western culture (Burkhardt and Nagai-Jacobson, 2002), there has been a push to dissociate the two. This has led to the emergence of lay spirituality, which captures all the experiences that make up the human world while attempting to distance itself from acquisitive views. This concept accepts all practices of meditation, which it ranks as very useful for human development, but it does not embrace prayer, as there is no belief in God or any other supernatural being. The notion thus encompasses pluralism, personalized beliefs and openness to newer ideas that may not be tolerated by any particular religious doctrine. Spirituality, therefore, goes beyond religion, as even atheists who are sceptical about the existence of spirits also subscribe to it.
The new definition of the term details the connection of a human being to some force or energy which leads them to a deeper self.

Conclusion

Spirituality, according to the above discussion, is understood in many different ways depending on personal interpretation. The only point of convergence is that all those who have achieved spirituality are at the highest level of the realization of human needs. They have achieved peace between themselves and their surroundings, and it follows that they are happy with the way they lead their lives. Contentment with life, as a consequence of spirituality, points to happiness and hence to a good life. In conclusion, spirituality is the best indicator of a well lived life in comparison with the other indicators.

References

Burkhardt, M. A. and Nagai-Jacobson, M. G. (2002). Spirituality: Living Our Connectedness. New York: Delmar, Thomson Learning Inc.

Harriman, P. L. (1946). Twentieth Century Psychology: Recent Developments in Psychology. The Philosophical Library, Inc.

Monday, November 25, 2019

Practice Experience, Decision-Making and Professional Authority

Abstract

I focus upon an incident in which I worked alone with a service user suffering domestic violence. There is no legal definition of domestic violence (DV). However, the government describes DV as "any incident of threatening behaviour, violence or abuse (psychological, physical, sexual, financial or emotional) between adults who are or have been in a relationship together or between family members, regardless of gender and sexuality" (Home Office 2012 [online]). Most reported DV cases are directed by men towards women. DV also occurs in same-sex relationships and, in a relatively small number of cases, is perpetrated by women against men. The case study is described, and my experiences of decision-making and how I exercised professional authority are brought out. I explore the theories surrounding the areas I look at in the case study, particularly the ways theory links with practice. I also look at the legal and policy framework relevant to the case study, and I bring out personal skills, including communication, and examine the role they played.

2. Case Study

I started my placement with a domestic violence agency in an inner-city London borough in March 2012. The agency works exclusively with service users fleeing domestic violence. The incident I have chosen for this case study involved lone working with a service user who had approached the organisation where I am on placement for support regarding the domestic violence she was suffering at home. The service user will be referred to as SS in accordance with the Data Protection Act 1998. SS is a French national who had relocated to the United Kingdom with her mother at the age of 14. Her relationship with her mother had broken down because she remained in a relationship with the perpetrator (Perp) despite her mother's disapproval. She had been married to the Perp for three years. They have two children, aged 14 months and 2 months; the first child is a girl and the newborn baby a boy. The Perp started to be controlling after she had their first child, and after their second child was born the abuse happened more frequently. On one occasion, the Perp grabbed SS, threw her against the wall and hit her several times across the face. She approached my placement agency for support after being advised to do so by another agency. SS attended as a duty case. Therefore, the onus was on me, as the duty staff member, to assess her circumstances and offer appropriate advice and support to safeguard SS and her children. As such, I had a role of responsibility, with the future wellbeing of SS partly in my hands. I prepared for the DV2 assessment in line with the agency's lone working policy. I signed SS in and informed my peers of the room where the assessment was taking place; I also booked her in in the duty book. I made sure the assessment room I chose was available, clean, spacious and comfortable, as SS was accompanied by a professional from another agency as well as by her children. I also made sure that SS had a private space to discuss her issues and express her feelings without interruption from her children, and I provided toys to distract her eldest child whilst the assessment was going on.
SS informed me during the assessment that some of the documents she might need to present at the Homeless Options Centre (HOC) were at home, and she offered to rush home quickly to pick them up whilst the perpetrator (Perp) was still at work. I advised her that the risk associated with such action might be too great for the children and herself, as the Perp might walk in on them. I gave her the options of either getting a police escort or asking a friend who lives near the house to collect the documents for her. By so doing I was able to manage the risk to SS and her children. I used the DV2 assessment form to understand her needs, whilst also working together with SS to respond to her crisis situation. I was able to address behaviours that posed a risk to her and the children, and I recognised and acknowledged the risk to SS if she returned to the house alone. She agreed that going back to the family house would put the children and herself at risk. The risk was assessed and plans were put in place to manage it and to minimise the risk of further abuse to her and the children, in accordance with the GSCC Code of Practice and the Every Child Matters agenda (2004). Through the DV2 assessment, I was also able to identify the need for safe and comfortable accommodation to manage the identified risk to SS and her children. However, my concern was that she might go back to the abusive relationship if appropriate support was not available. Because of her nationality (French), SS would still need to pass the habitual residence test (HRT) as an EEA national to ascertain her rights to public funds in the United Kingdom. If she failed this test, she might not be eligible for benefits, and the stress of managing two children without an income might be too much for her to cope with alone. Hence, without adequate support available, she might return to the abusive environment from which she was trying to flee. I wrote a housing letter to the HOC to support her claim relating to domestic violence and also offered to attend the HOC with her to advocate on her behalf. By so doing I was able to mitigate the risk to which SS and the children were exposed. I gave her a voice and also considered the children's situation and the statutory support that might be available for them. I searched for local refuges to address her housing need, made a referral to children's social services, provided food vouchers and made arrangements for them to be accommodated away from the Perp's area of influence.

3. Theoretical Context / Legal and Policy Framework, Application to Case Study

This case can be seen in the context of a number of theories relevant to the situation, and also in the context of the current legal and policy framework. This section looks at these connections, first the theoretical context and then the legal situation.

3.1 Theoretical Context and Case Study Application

The psychologist Maslow suggested that all human beings have a number of needs which must be satisfied in order for a person to live at his or her full potential. These needs, he also suggested, form a hierarchy; that is, they are ordered. 'Lower' needs must be satisfied first, before other needs come into play (Maslow 1943). Physiological needs are the most basic, including the things we need to do to survive (eating, sleeping etc.). Next come security needs, that is, the need for shelter and access to health services. Once these are satisfied, social needs become relevant.
These are the needs for feeling part of a group, for affection and the like, and they are frequently satisfied by work, friends and family. Higher-level needs are the need for esteem, that is, for feeling good about oneself and for social recognition, and the need for 'self-actualisation', the need for personal growth and for fulfilling one's abilities to the full (Zastrow et al 2009). In terms of this model, SS was clearly struggling with needs at the lowest levels, the physiological and security needs. She was finding it difficult to access funds for her children to eat, and given that the Perp lived in the family home, her housing needs were threatened. SS's health would also be under threat should she return to the family home. As such, the model predicts that SS would be feeling exceptionally insecure and vulnerable: partially met or unmet human needs are associated with increasing vulnerability (De Chesnay and Anderson 2008, p. 489). Another theoretical model is provided by crisis intervention theory. This is a particularly useful model as it is directed towards practical action (Coady and Lehman 2007). It is based around the idea that a crisis presents both a challenge (in an extreme form) and an opportunity (Roberts 1995). In other words, a crisis can lead to positive change. One leading developer of the idea was Erikson (1950), who looked at the role played by crisis in the maturation of typical human beings. Eric Lindemann (1944) developed a systematic model for dealing with crisis. A model suggested by Golan (1978) is useful for this case study. Golan suggested four stages: first, the person suffering the crisis develops a subjective response to the situation. Next, this leads to upset or lack of balance, as previous ways of dealing with problems no longer apply. At this stage, though it is chaotic, there is some hope for new approaches. Thirdly, this particular crisis can link to unresolved conflicts in the person. Finally, the first three stages lead to new opportunities to develop new ways of dealing with potentially damaging situations. It is the role of the professional to help the service user see these new ways of responding (Roberts 1995). In terms of the case study, SS seems to have been at a crisis point, and one which led to the development of new ways of dealing with her abusive partner. One incident led to SS presenting to the agency, but this crisis incident was the catalyst for realising that her previous attempts to deal with the violence, by staying with her abusive partner, were not working. Though greatly distressed, particularly by the upheaval she and her children were experiencing, SS learnt new ways of dealing with the situation, primarily removing herself from the family home and asking the authorities for support in finding a new home. I personally found psychodynamic theory, which aims to uncover the reasons for domestic abuse in early childhood experiences, less useful for this particular case study. The ideas about how rage from childhood is visited upon an adult's current partner (Sanderson 2008) were, I felt, useful as background; however, psychodynamic theory seems to advocate long-term therapeutic treatments which are simply not possible or appropriate given the immediate emergency of the situation. Nevertheless, I did find that ideas like this helped me step back and understand that people's actions sometimes stem from very deeply rooted issues which are hard to tackle.
This has helped me overcome an early frustration with some clients' seeming inability to see clearly what is going on. I felt the more practically focused therapeutic techniques were more useful. These included person-centred and task-centred approaches. Person-centred (or client-centred) therapy was developed by Rogers in the 1940s. It suggests that the client is at the centre of any counselling process, and that it is the task of the counsellor (or, in this case, the social services professional) to understand how the client sees the world. Without this understanding, it is not possible to help the client move forward. The person-centred approach advocates avoiding lecturing, manipulating, bribing, directing or otherwise trying to change the client's behaviour from outside. Rather, the need is to empower the client to grow (Vincent 2005). There are a number of techniques which can be used, including empathy, congruence (letting the client see you as you are) and positive regard (Jarvis et al 2002). I found this theory particularly useful, as it made me realise the need to abandon judgement of SS and get to know how she saw the situation. The idea of congruence helped me see that I need not present an entirely blank, professional face, but could inject something of my own personality into our meetings. The task-centred approach is time-limited; that is, unlike many psychodynamic therapies, it is carried out over a time period fixed in advance. It is therefore useful in situations like this one, where only a limited period of time is available. The idea was developed in the USA but has become widespread elsewhere. It involves client and professional agreeing together on some goals to be reached over a clearly defined period of time. It emerged from a background in which social work was hampered by a psychoanalytic perspective on client behaviour, and thus offered a much-needed way to focus upon specifics and deliverables. The approach involves mutual agreement about goals and about problems which the service user can see for themselves and can work on by themselves between sessions. The focus is upon what the user wants to change (Wilson et al 2008). I found this approach a useful one to combine with a person-centred perspective. I was able to agree with SS the things she wanted to change (living with her abusive partner), and we agreed tasks to complete to achieve this overall goal, including contacting other agencies for housing advice. Ideas about risk management and risk assessment were also very useful in this particular case, as there was a risk of harm from the Perp for the client, and perhaps also for the social services professionals who became involved. Because perceptions of risk are highly subjective, there is a need to objectify the existing risks as far as possible, to try to eliminate as much bias as one can. Normative models of risk, which address how to make the best decision when there are a number of possible options or ways forward, are useful here: they allow an assessment of how likely certain outcomes are (Messer and Jones 1999, p. 90). As this situation involves young children, structured risk assessment models are useful, as they allow the situation to be assessed in terms of the children's needs as well as the mother's (Harne 2011). Many now advocate shifting away from risk assessment that looks at single factors towards looking at the interaction of factors across individual, social and cultural domains (Chalk and King 1998, p. 277).

3.2 Legal and Policy Framework, and Case Study Application

There are a number of legal and policy documents which are also relevant to the case study. The Domestic Violence, Crime and Victims Act 2004 was designed to extend the protection available to vulnerable adults and children, and included a new offence of 'causing or allowing the death of a child or vulnerable adult', designed to address cases where two partners fail to admit responsibility for injury to a child. It also made common assault an arrestable offence, added new powers to fine offenders, and changed non-molestation orders so that non-compliance can attract a prison sentence of up to five years. The circumstances under which a case can be heard without a jury were also extended (Guardian 2009). This Act offers a range of additional protections for the victims of domestic violence in terms of criminal proceedings which might be brought against the perpetrator. In the case study above, SS had not reached the point of deciding to press charges against her partner. My first concern was to make sure she had secure accommodation and was safe from further abuse. However, in time she will consider the possibility of legal action against her partner, and I feel the Act offers extra protection for her which might make her more likely to consider it. As the situation is a complex one, a number of other legal frameworks and policies are relevant. Because SS is unable to return to the family home owing to the risk of further abuse, the Housing Act 1996 is also relevant. Under the Act, SS is likely to be eligible to be housed by her local authority, as it places a duty on authorities to advise, and possibly house, people if they are under threat of becoming homeless, have a priority need, are not intentionally homeless and have a local connection to the area (amongst other criteria) (Housing Act 1996). These conditions apply in SS's case. Additionally, the code of guidance for local authorities in regard to homelessness was published in 2008. This spells out the duty of local authorities more clearly, and also suggests a need for social services and housing bodies to work together more closely (Department for Education and Skills 2008). However, because SS is originally from France, there is a question regarding her access to public funds. In order to benefit from the help she needs, she must pass the Habitual Residence Test. The test was developed as a way of ensuring that only people with a connection to the UK can claim benefits here. The concept is not legally defined, and in practice a number of markers are involved in decision-making, including length of stay, continuity of stay, the person's intentions and the nature of their residence (Currie 2008). It is claimed that the concept of habitual residence is more stringent than the concept of ordinary residence found elsewhere in law. It is likely that the presence of the children, SS's history and her desire to remain in the country will all count in her favour; however, the test represents another obstacle to the security of her future (Harris 2000). Additionally, the Nationality, Immigration and Asylum Act 2002 restricts entry to, and leave to remain in, the UK. Finally, I was also influenced by the Data Protection Act 1998, which protects the confidentiality of client data, for example dictating that I refer to clients only by initials or pseudonyms, and by the GSCC Code of Practice.
The latter is particularly important, as it provides the framework within which social workers should operate in the UK. Part of the guidance is concerned with the need to protect and promote the interests of the client, establish their trust and confidence, and promote their independence. These aspects are particularly interesting in the light of the theory discussed above, as they are broadly in line with the aims of the client-centred and task-centred approaches.

5. Conclusion

This essay has addressed a case study taken from my experience of working with victims of domestic violence. I have tried to show how theory is relevant to the situation I describe, and how legal issues are also relevant. It was necessary for me to make a number of decisions throughout the experience I describe; however, perhaps the most important skill I brought to bear was communication. I had to communicate not only with other agency staff about this case, but also with multiple outside agencies (housing, benefits and similar) and, perhaps most importantly, with the client. The section on theory above has pointed out some of the therapeutic perspectives which were useful, and the client-centred approach, with its emphasis upon empathy and understanding, was particularly helpful to me in the communication process. I have also used feedback from other staff members and reflective feedback processes to understand the impact of how I communicate with clients, and I use this feedback and reflection to improve my skill set. SS's case history, like all cases of domestic violence I have come across, is complex and requires an equally complex set of skills on the part of the social worker to produce the best possible outcomes.

6. References

Chalk, R. A. and King, P. (1998) Violence in Families: Assessing Prevention and Treatment Programs, National Academies Press, USA

Coady, N. and Lehman, P. (2007) Theoretical Perspectives for Direct Social Work Practice: A Generalist-Eclectic Approach (2nd edn), Springer Publishing Company, USA

Currie, S. (2008) Migration, Work and Citizenship in the Enlarged European Union, Ashgate Publishing Ltd, UK

De Chesnay, M. and Anderson, B. A. (2008) Caring for the Vulnerable: Perspectives in Nursing Theory (2nd edn), Jones and Bartlett Learning, Sudbury, MA

Department for Education and Skills (2008) 'Homelessness Code of Guidance', HMSO, London

Erikson, E. (1950) Childhood and Society, W. W. Norton, New York

Golan, N. (1978) Treatment in Crisis Situations, Free Press, New York

The Guardian (2009a) 'Domestic Violence Act' [online] (cited 27th May 2012), available from guardian.co.uk/commentisfree/libertycentral/2009/jan/13/domestic-violence-act

The Guardian (2009b) 'Immigration, Asylum and Nationality Act 2006' [online] (cited 26th May 2012), available from guardian.co.uk/commentisfree/libertycentral/2009/jan/15/immigration-asylum-nationality-act

Harne, L. (2011) Violent Fathering and the Risks to Children: The Need for Change, The Policy Press, Bristol

Harris, N. S. (2000) Social Security Law in Context, Oxford University Press, Oxford

The Home Office (2012) 'Domestic Violence' [online] (cited 28th May 2012), available from homeoffice.gov.uk/crime/violence-against-women-girls/domestic-violence/

Jarvis, M., Putwain, D. and Dwyer, J. (2002) Angles on Atypical Psychology, Nelson Thornes, Cheltenham, Glos

Lindemann, E. (1944) 'Symptomatology and management of acute grief', American Journal of Psychiatry, 101, 141-148

Maslow, A. (1943) 'A theory of human motivation', Psychological Review, 50, 370-96

Messer, D. J. and Jones, F. (1999) Psychology and Social Care, Jessica Kingsley Publishers, UK

Sanderson, C. (2008) Counselling Survivors of Domestic Abuse, Jessica Kingsley Publishers, UK

Vincent, S. (2005) Being Empathic: A Companion for Counsellors and Therapists, Radcliffe Publishing, UK

Wilson, K., Ruch, G. and Lymbery, M. (2008) Social Work: An Introduction to Contemporary Practice, Pearson Education, Harlow, Essex

Zastrow, C. and Kirst-Ashman, K. K. (2009) Understanding Human Behavior and the Social Environment (8th edn), Cengage Learning, Belmont, CA

Friday, November 22, 2019

Analyze the handover between two WLAN, two Wimax and two UMTS networks

Analyze the handover between two WLAN, two WiMAX and two UMTS networks

ABSTRACT

In recent years, telecommunication has flourished so extensively that several areas of study now coexist thanks to multiple technologies. One of them is heterogeneous handover, a concept that aims to provide continuity of connection while crossing different networks. In this thesis, our main objective is to analyze the handover between two WLAN, two WiMAX and two UMTS networks. The vertical handover decision is taken on the basis of various algorithms, such as the variance-based algorithm and the Taguchi algorithm, which calculate the variance of parameters such as delay, jitter, bandwidth and packet loss for the above networks and select the network for which most parameters have the best score. These algorithms are computed and the decision factors for each wireless network are compared in order to detect and trigger a vertical handover. The factors can be classified as beneficial (the larger, the better) or as costs (the lower, the better). The proposed algorithm is also compared with other algorithms such as MEW (Multiplicative Exponent Weighting), SAW (Simple Additive Weighting), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and GRA (Grey Relational Analysis), which are appropriate for different traffic classes. Simulation results for the proposed algorithm in MATLAB are discussed and compared with those of the other multiple attribute decision making algorithms on the basis of bandwidth, jitter, delay and so on. It can be seen that the proposed algorithm yields lower packet delay than the others, its jitter is also comparatively lower, and it provides higher bandwidth than any other MADM algorithm.

Keywords: UMTS, SAW, MEW, GRA, TOPSIS, WLAN and WiMAX

Chapter-1 Introduction

1.1 History of mobile services

The journey of mobile telephony began with the 1st generation services. These were based on AMPS (Advanced Mobile Phone System), developed in the 1970s, which used analog cellular technology. The data bandwidth provided by the system was just 1.9 kbps, and it used FDMA multiplexing. Then the 2nd generation of mobile services was introduced in 1981. The 2G systems are still largely used for voice calls. The data bandwidth provided was 14.4 kbps [1], and the technologies used were TDMA and CDMA. 2G is based on digital technology and provided short messaging services (SMS) along with voice communication, as well as circuit-switched data communication services at low speed. In 1999, the technology moved to 2.5G, which used GPRS and EDGE as the standards and provided higher throughput for data services, up to 384 kbps. Later, in 2002, 3G services were introduced, providing high quality audio, video and data services, with broadband data capabilities up to 2 Mbps. 3G mainly uses packet-switched technology, which utilizes the bandwidth more efficiently. When the 4th generation of cellular technology was introduced in 2010, it was expected to complement and then replace the 3G networks. The key feature of 4G mobile networks is that they can deliver information anywhere and anytime using seamless connections. A 4G network is an IP-based network which gives access through a collection of various radio interfaces. It provides access to the best possible service with seamless handoff, combining multiple radio interfaces into a single network for the subscriber to use.
Thus, users have access to different services with increased coverage. Even if one or more networks fail or are lost, 4G keeps all the networks integrated into an IP-based system, which requires vertical handoff for a seamless connection between the networks. As the number of users is responsible for enhancing the quality of the 4G service, this process becomes an indispensable component. While 4G technology has its genesis in the idea of pervasive computing, software defined radio is the prominent adhesive behind the entire process. Here the software defined radio is programmable and able to transmit and receive over a wide range of frequencies while emulating any transmission format. 4G should offer high speeds of 100 Mbit/s for a stationary mobile and 20 Mbit/s while travelling, with a network capacity 10 times that of 3G networks. This reduces the download time for a 1 Mbyte file to 1 second, compared to 200 seconds in 3G networks. It should support high-speed, high-volume data transmission at lower cost, and it should obviously provide seamless connection between multiple wireless and mobile networks; for this, support for vertical handoff is essential. Apart from this, seamless multimedia services are expected, since 4G is an IP-based system, which also replaces SS7 (Signaling System 7), a protocol that consumes a considerable amount of bandwidth. Because the network is IP-based, optimum usage of bandwidth is expected.

1.2 Motivation

There are several communication systems, such as Ethernet, wireless LAN, GPRS and 3G, coexisting with their own different characteristics such as bandwidth, delay and cost. Wireless mobile users require a high quality of service (QoS), and one of the factors directly affecting QoS is the number of call drops. Therefore this number has to be reduced or, if possible, eliminated to achieve high QoS. The number of call drops experienced by a system depends mainly on its channel assignment and handoff schemes. Since the majority of WLANs are deployed in areas like hotels, cafes, airports and offices, user speeds are generally limited within the WLAN coverage area. In WLAN/cellular network interworking, a user can have access either from the micro layer or from the macro layer of the cellular network, depending on whether the user is slow or fast. Basically, in cellular networks, user speed is the primary factor in determining whether a user is fast or slow, and that information is subsequently used to handle vertical handoff. Since speed information about users is not directly available when they are in the WLAN coverage area, the vertical handoff schemes employed in cellular networks are not directly suitable for solving the vertical handoff problem in cellular/WLAN interworking. This raises many important questions. Assume that the speed of each user in a WLAN coverage area is within a small threshold value. Normally, when a user is outside the WLAN coverage area, the type of user, whether fast or slow, is determined on the basis of the user's speed. The question is how to determine the type of user, whether fast or slow, when the user is within the WLAN coverage area. A fast user can become slow temporarily due to various conditions such as traffic signals, turns, and so on. Is speed alone sufficient to determine whether a user is slow or fast? If not, then (i) what other parameters can be used to determine whether a user is fast or slow, and (ii) how can they be obtained?
In fact, this lack of clarity inspired the researcher to develop a vertical handoff decision algorithm to solve vertical handoff problems.

1.3 Scope of thesis

With greater mobility and easier access, telecommunication consumers have become demanding, seeking services anywhere and anytime. Thus, the integration of WLAN (wireless LAN), WiMAX and cellular networks such as WCDMA (wideband CDMA) systems should be error free for seamless, efficient communication; this is the 4th generation technology. Seamless and efficient handover between different access technologies, known as vertical handover, is essential and remains a challenging problem. 4G is seen as the convergence and integration of various wireless access technologies. Existing cellular systems such as GSM and CDMA2000 support low bandwidth over a large coverage area, whereas wireless networks such as WLAN support high bandwidth over a short coverage area. Moreover, one of the major design issues of 4G is the support of vertical handover. Interestingly, this is different from a 'horizontal handoff' between different wireless access points that use the same technology. Switching between two dissimilar networks for a mobile terminal (e.g. between UMTS and WLAN) is termed a vertical handover. A vertical handover involves two different network interfaces for different wireless technologies, and it can happen in two ways. Firstly, when the mobile user moves into a network that has higher bandwidth and limited coverage, a vertical handover request is generated, since the mobile user may want to change its connection to the higher-bandwidth network to enjoy the higher-bandwidth service. This type of vertical handover is called downward vertical handoff. Secondly, when the mobile user moves out of its serving higher-bandwidth network, it has to request a vertical handover to change its connection to the network with low bandwidth and wide coverage. This type of vertical handover is called upward vertical handover.

Chapter-2 Research objectives

2.1 Objectives

The present research aims at comparing various existing multiple attribute decision making (MADM) ranking algorithms for the realization of vertical handoff, namely MEW (Multiplicative Exponent Weighting), SAW (Simple Additive Weighting), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and GRA (Grey Relational Analysis), with the proposed vertical handoff decision algorithm. For this comparison, various heterogeneous networks, namely UMTS (Universal Mobile Telecommunication System), WLAN (Wireless Local Area Network) and WiMAX (Worldwide Interoperability for Microwave Access), need to be taken into consideration. The comparison is mostly on the basis of various parameters such as bandwidth, jitter, packet delay and packet loss. In addition, the comparison may be made for different types of traffic, such as data connections and voice connections. As all the above-mentioned algorithms are multiple attribute algorithms, due importance is given to the parameters considered in them. The performance evaluation of the proposed decision algorithm should be done on the basis of the parameters mentioned above. For the various traffic types, we can observe how each algorithm performs and, depending on the performance, conclude which algorithm is suitable for which traffic. In voice connections, 70% importance is given to packet delay and jitter, i.e.,
by assigning 70% of the weight to these parameters and distributing the remaining weight equally among the other attributes. If one of the algorithms performs well under this weighting, that algorithm can be considered best suited for voice connections. In data connections, 70% importance is given to bandwidth, i.e., 70% of the weight is assigned to bandwidth and the remaining weight is distributed equally among the other parameters. If one of the algorithms performs well in this case, that algorithm is suitable for data connections. The ultimate aim is the development of a decision-making algorithm that works well for both voice and data connections.

2.2 Methodology

In order to realize vertical handoff using the existing multiple attribute decision making algorithms and to evaluate the performance of each of them along with the proposed algorithm, we consider the selection of a network in a 4G environment. Three types of network, UMTS, WLAN and WiMAX, are combined, with two networks of each type. In this thesis, four decision criteria are evaluated and compared to realize vertical handoff: available bandwidth (Mbps), packet delay (ms), packet jitter (ms) and packet loss (per 10^6 packets). The ranges of values for the various parameters are as follows. For the UMTS networks: available bandwidth 0.1-2 Mbps, packet delay 25-50 ms, jitter 5-10 ms. For the WLAN networks: bandwidth 1-54 Mbps, packet delay 100-150 ms, jitter 10-20 ms. For the WiMAX networks: bandwidth 1-60 Mbps, packet delay 60-100 ms, jitter 3-10 ms. Two weighting cases are considered for the different services. Case 1: packet delay and jitter are given 70% importance and the rest is distributed equally among the other parameters; this case suits voice connections. Case 2: available bandwidth is given 70% importance; this case suits data connections. For each algorithm, 10 vertical handoff decisions were considered for each case separately. Performance evaluation is done for the two cases, voice connections and data connections, using the MATLAB v7.6 (release 2009) software tool. Next, using an artificial neural network (ANN), we can design a system to take the vertical handoff decision. Input parameters such as samples of received signal strength (RSS) and bandwidth are applied to the input layer; the hidden layer performs processing that depends on the number of neurons and the algorithm chosen; and the output layer gives the ID of the selected candidate network. In the ANN-based method, the handoff is between WLAN and cellular networks, and two parameters are taken into consideration as inputs to the neural network: RSS and bandwidth. The RSS samples used for training, for both the WLAN and the cellular network, are -60 dBm, -70 dBm, -80 dBm and -90 dBm. Similarly, the bandwidth samples for the WLAN are 54, 30, 10 and 1 Mbps, and the bandwidth samples for the cellular network are 14.4, 9.6, 4.5 and 2 kbps. By combining the RSS and bandwidth parameters, 256 input samples can be generated for the ANN; the corresponding desired vertical handoff decisions are also fed to the ANN. Using the Levenberg-Marquardt training method, 180 samples are used for training, 38 for validation and 38 for testing. Based on the developed ANN system, the vertical handoff decision from cellular to WLAN can be taken.
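As a rough illustration of this ANN-based decision stage, the sketch below builds the 256 (RSS, bandwidth) input combinations listed above and trains a small multilayer perceptron to output the selected network. It assumes scikit-learn is available (whose MLP uses its own solvers rather than Levenberg-Marquardt), and the labelling rule used to generate the training targets, the hidden-layer size and the thresholds are illustrative assumptions, not the trained model described in this thesis.

# Sketch of an ANN-based vertical handoff decision stage (illustrative only).
# Inputs are (RSS_wlan, RSS_cell, BW_wlan, BW_cell) combinations as listed in
# the methodology; the labelling rule below is a stand-in for real targets.
from itertools import product
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

RSS = [-60, -70, -80, -90]                  # dBm, for both WLAN and cellular
BW_WLAN = [54, 30, 10, 1]                   # Mbps
BW_CELL = [0.0144, 0.0096, 0.0045, 0.002]   # Mbps (14.4, 9.6, 4.5, 2 kbps)

# 4 x 4 x 4 x 4 = 256 input combinations
X = [list(s) for s in product(RSS, RSS, BW_WLAN, BW_CELL)]

# Hypothetical target: 1 = hand off to / stay on WLAN, 0 = use cellular.
y = [1 if (rss_w > -85 and bw_w >= 10) else 0 for rss_w, rss_c, bw_w, bw_c in X]

# 180 training samples, 76 held out (cf. the 38 + 38 validation/test split above).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=76, random_state=0)

# Small multilayer perceptron: one hidden layer of 10 neurons.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("decision for (-70 dBm, -80 dBm, 54 Mbps, 0.0144 Mbps):",
      clf.predict([[-70, -80, 54, 0.0144]])[0])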
Lastly, the ns-2 software tool can also be used. The ns-2 simulation uses 802.11 nodes and 802.16 (WiMAX) nodes: four 802.11 nodes (access points) and four 802.16 nodes (base stations), using the NIST 802.16e module for ns-2. In this case, the existing algorithm with the best score is selected for triggering the vertical handoff decision, and various parameters such as bandwidth, bit error rate and trust level are considered for the decision. This can be tested against the various traffic types available in ns-2, such as CBR (constant bit rate), which corresponds to real-time traffic (voice communication), and FTP (File Transfer Protocol), which corresponds to non-real-time traffic. The performance evaluation for the various traffic types can be done on parameters such as packet delivery ratio, throughput, jitter and packet dropping ratio against simulation time.

2.3 Related Work

Enrique Stevens-Navarro and Vincent W. S. Wong [2], in their paper, compared four different vertical handoff decision algorithms, namely MEW (Multiplicative Exponent Weighting), SAW (Simple Additive Weighting), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and GRA (Grey Relational Analysis). All four algorithms allow different parameters (e.g. bandwidth, delay, packet loss rate, cost) to be considered in the vertical handoff decision [2]. The authors found that MEW, SAW and TOPSIS provide almost the same performance for all four traffic classes; only GRA gives a slightly higher bandwidth and lower delay for the interactive and background traffic classes. Jose D. Martinez, Ulises Pinedo-Rico and Enrique Stevens-Navarro, in their paper, gave a comparative analysis of the multiple attribute decision algorithms [3]. They provided a simulation study of several vertical handoff decision algorithms in order to understand their performance for different user applications, considering two different applications: voice and data connections. Algorithms such as SAW and TOPSIS are suitable for voice connections, as they provided the lowest values of jitter and packet delay available in a 4G wireless network. In the data connection case, the GRA and MEW algorithms provided the solution with the highest available bandwidth, which is what this application requires.

Chapter-3 Classification of vertical handoff algorithms

3.1 Need for vertical handoff

Currently, the trend in mobile communications is not one network technology replacing another, but interoperability between different overlapping networks. It is therefore expected that many wireless networks will coexist and complement each other in an all-IP heterogeneous wireless network. This can give mobile users easy access to the internet and IP connectivity anywhere, anytime, using the 'best' possible network. This is mainly because different wireless networking technologies have their own advantages and drawbacks. Access to various wireless systems results in heterogeneous networks that can offer overlapping coverage of multiple networks with different technologies.
For example, a low-cost and high-speed Wi-Fi (WLAN) network will be accessible within the limited range of 'hot-spot' areas and will be complemented by a cellular network offering wide-area coverage, such as UMTS or WiMAX. As a consequence, some fundamental problems must be solved for users to navigate a 4G wireless network seamlessly. For this, a mobile terminal equipped with multiple interfaces to handle the different technologies is required. Furthermore, applications running on multi-mode mobile terminals in a 4G environment can switch between networks supporting different technologies without degrading the quality of the link. However, the Internet routing model forces a mobile terminal to obtain a new IP address for an interface while roaming in another network in the 4G environment. It is assumed that applications can easily manage mobility and can hand off to the best possible network; of course, some method is required to adjust media streams to the available bandwidth.

3.2 Types of vertical handoff

There are various ways to classify vertical handoff algorithms. In this thesis, we classify them into four groups based on the handoff criteria, as given below.

RSS-based algorithms: RSS is used as the main handoff decision criterion in this group. Different strategies have been developed to compare the RSS of the present point of attachment with that of candidate points of attachment. RSS-based horizontal handoff decision strategies are classified into subcategories such as relative received signal strength, relative received signal strength with threshold, relative received signal strength with hysteresis and threshold, and prediction techniques. For the vertical handoff decision, relative received signal strength on its own is not applicable, since the signal strengths from different types of networks cannot be compared directly owing to the different technologies involved; for example, different networks require different thresholds. Furthermore, other network parameters, such as bandwidth, are combined with RSS in the vertical handoff decision process.

Bandwidth-based algorithms: The bandwidth available to the mobile terminal is the main criterion in this group. In some algorithms, both bandwidth and signal strength information are used in the decision process; depending on whether RSS or bandwidth is the main criterion, an algorithm is categorized as signal strength based or bandwidth based.

Cost function based algorithms: This class of algorithms combines metrics such as monetary cost, security, bandwidth and power consumption in a cost function, and the handoff decision is made by comparing the score of this function for the candidate networks. Different weights are allotted to the different input parameters depending on the network conditions and user preferences.

ANN and fuzzy logic based algorithms: These vertical handoff decision algorithms attempt to use a richer set of inputs than the others for making handoff decisions. When a large number of inputs are used, it is very difficult or impossible to formulate the handoff decision process analytically; hence, it is better to apply machine learning techniques to formulate the process. The survey reveals that fuzzy logic and artificial neural network based techniques can be used. Fuzzy logic systems allow the qualitative expertise of humans to be incorporated into algorithms to enhance their efficiency.
If there exists a comprehensive set of input-desired output patterns, artificial neural networks can be trained to create handoff decision algorithms. By using consistent and real-time learning techniques, such systems can monitor their performance and change their own structure to create very effective handoff decision algorithms.

3.3.1 RSS based vertical handoff

Here, handoff decisions are made by comparing the RSS (received signal strength) of the current network with preset threshold values. These algorithms are less complex and may be combined with other parameters, such as bandwidth and cost, to reach better handover decisions. Two RSS-based algorithms are described in the following sections.

A) ALIVE-HO (adaptive lifetime based vertical handoff) algorithm

Zahran, Chen and Sreenan proposed an algorithm for handover between 3G networks and WLAN that combines the RSS with an estimated lifetime (the duration over which the current access technology remains beneficial to the active applications). ALIVE-HO always uses an uncongested network whenever available, and it continues using the preferred network (i.e. the WLAN) as long as it satisfies the QoS requirements of the application [5]. Two different vertical handoff scenarios can be discussed: moving out of the preferred network (MO) and moving into the preferred network (MI), where the preferred network is usually the underlay network that provides better and more economical service. Hence, extending the utilization of the WLAN as long as it provides satisfactory performance is the main consideration in the design of the vertical handoff algorithm. In the first scenario, when the MT moves away from the coverage area of a WLAN into a 3G cell, a handover to the 3G network is initiated. The handover is carried out under the following conditions: (a) the averaged RSS of the WLAN falls below a predefined threshold (the MO threshold), and (b) the estimated lifetime is at least equal to the required handoff signaling delay. The MT continuously calculates the RSS mean using the moving average method [4]:

RSS_avg[k] = (1 / W_av) * sum over i = k - W_av + 1, ..., k of RSS[i]

where RSS_avg[k] is the RSS mean at time instant k, and W_av is the window size, a variable that changes with the velocity of the mobile terminal. The lifetime metric EL[k] is then calculated from RSS_avg[k], the application signal strength threshold (ASST) and the RSS change rate S[k]:

EL[k] = (RSS_avg[k] - ASST) / S[k]

The ASST is chosen to satisfy the requirements of the active applications, and S[k] represents the RSS decay rate. In the second scenario, when the MT moves towards a WLAN cell, the handover to the WLAN is carried out if the average RSS of the WLAN is larger than the MI threshold and the available bandwidth of the WLAN meets the bandwidth requirement of the application. Table 3.1 below shows the frames lost during the handoff transition area for the received stream.

Table 3.1 Frames lost corresponding to ASST [5]

ASST (dB):                 -90    -89   -88   -87    -86   -85
Lost frames, 100 kbit/s:   13.3   5     3     0.67   0     0
Lost frames, 300 kbit/s:   38     28    4     0.33   0     0

Based on the results obtained and subjective testing, the optimal value for UDP-based streaming is chosen as -86 dB. Firstly, by introducing EL[k], the algorithm adapts to the application requirements and reduces unnecessary handovers. Secondly, there is an improvement in the average throughput for the user, because the MT prefers to stay in the WLAN cell as long as possible. However, packet delay grows due to the critical fading impact near the cell edges, which may result in severe degradation of the user-perceived QoS. This phenomenon results in a tradeoff between improving system resource utilization and satisfying the user's QoS requirements, and the issue can be critical for delay-sensitive applications and degrade their performance. Here, the ASST is tuned according to various system parameters, including delay thresholds, MT velocities, handover signaling costs and packet delay penalties.
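The MO check above can be sketched in a few lines of Python. The window size, threshold values and the simple difference-based estimate of the decay rate S[k] below are illustrative assumptions rather than the settings of the cited work; the decision rule simply mirrors conditions (a) and (b) as stated above.

# Illustrative sketch of the ALIVE-HO style MO check: moving-average RSS plus
# the lifetime metric EL[k] = (RSS_avg[k] - ASST) / S[k]. Window size, thresholds
# and the decay-rate estimator below are assumptions for the example.

def rss_moving_average(samples, k, w_av):
    """Mean of the last w_av RSS samples ending at index k."""
    window = samples[max(0, k - w_av + 1): k + 1]
    return sum(window) / len(window)

def lifetime(samples, k, w_av, asst):
    """Estimated time (in sample periods) before the averaged RSS reaches ASST."""
    if k == 0:
        return float("inf")
    avg_now = rss_moving_average(samples, k, w_av)
    avg_prev = rss_moving_average(samples, k - 1, w_av)
    decay = avg_prev - avg_now          # S[k]: positive while the RSS is decaying
    if decay <= 0:
        return float("inf")             # signal not decaying: lifetime unbounded
    return (avg_now - asst) / decay

def should_handoff_to_3g(samples, k, w_av=10, mo_threshold=-85.0,
                         asst=-86.0, handoff_delay=5.0):
    """MO rule as stated above: (a) averaged RSS below the MO threshold and
    (b) estimated lifetime at least equal to the handoff signaling delay."""
    avg = rss_moving_average(samples, k, w_av)
    return avg < mo_threshold and lifetime(samples, k, w_av, asst) >= handoff_delay

# Example: a WLAN RSS trace (dBm) that decays slowly towards the cell edge.
trace = [-80.0 - 0.1 * i for i in range(60)]
print(should_handoff_to_3g(trace, k=len(trace) - 1))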
B) Algorithm on Adaptive RSS Threshold

Mohanty and Akyildiz, in their paper, proposed a WLAN-to-3G handover decision method [6]. In this method, the RSS of the current network is compared with a dynamic RSS threshold (Sth) when the MT is connected to a WLAN access point. The following notation refers to Fig. 3.1, which shows a handoff from the current network (AP), referred to as the WLAN, to the future network (BS), referred to as 3G.

Fig. 3.1 Analysis of the handoff process [6]

* Sth: the threshold value of RSS at which the handover process is initiated. When the RSS of the WLAN, referred to as ORSS (old RSS) in Fig. 3.1, drops below Sth, the registration procedures for the MT's handover to the 3G network are initiated.
* a: the cell size; the cells are assumed to be hexagonal.
* d: the shortest distance between the point at which the handover is initiated and the WLAN boundary.

The path loss model is

Pr(x) = Pr(d0) - 10 * beta * log10(x / d0) + X_sigma

where x is the distance between the access point and the mobile terminal, and Pr(d0) is the received power at a known reference distance d0. The typical value of d0 is 1 km for macrocells, 100 m for outdoor microcells and 1 m for indoor picocells. The numerical value of Pr(d0) depends on factors such as frequency, antenna heights and antenna gains. beta is the path loss exponent, with typical values ranging from 3 to 4 for macrocellular environments and from 2 to 8 for microcellular environments. X_sigma is a zero-mean Gaussian random variable that represents the statistical variation in Pr(x) caused by shadowing; its typical standard deviation is 12 dB. Applying the path loss model to the scenario (and neglecting the shadowing term),

Pr(a - d) = Pr(a) + 10 * beta * log10(a / (a - d))

so the adaptive threshold is set as

Sth = Smin + 10 * beta * log10(a / (a - d))

When the MT is located at point P, it is assumed that it can move in any direction with equal probability, i.e. the pdf of the MT's direction of motion theta is

f(theta) = 1 / (2 * pi),  -pi <= theta <= pi                                   ...(1)

It is also assumed that the MT's direction of motion and speed remain the same from point P until it moves out of the coverage area of the WLAN; as the distance of P from the WLAN boundary is not very large, this assumption is realistic. The need for a handoff to the cellular network arises only if the MT's direction of motion from P lies in the range [-theta_1, theta_1], where the angle theta_1 is given by an arctangent determined by the geometry of Fig. 3.1; otherwise the handoff initiation is false. Using (1), the probability of false handoff initiation is

P_alpha = 1 - P(favourable event) = 1 - (2 * theta_1) / (2 * pi) = 1 - theta_1 / pi      ...(2)

When the direction of motion of the mobile terminal from P is theta, the time it takes to move out of the coverage area of the WLAN cell (the old base station) follows from Fig. 3.1: cos(theta) = d / x, so x = d * sec(theta), and hence

t = x / v = d * sec(theta) / v                                                 ...(3)

Given that a handoff is needed, the pdf of theta is

f(theta) = 1 / (2 * theta_1) for -theta_1 <= theta <= theta_1, and 0 otherwise.

From (3), t is a function of theta, t = g(theta) = d * sec(theta) / v on [-theta_1, theta_1], so the pdf of t is given by the change-of-variables formula

f_t(t) = sum over i of f(theta_i) / |g'(theta_i)|                              ...(4)

where the theta_i are the roots of t = g(theta) in [-theta_1, theta_1]. For each t there are two such roots, theta and -theta, and for each of them

f(theta_i) = 1 / (2 * theta_1), for i = 1 and 2                                ...(5)

The derivative of g is

g'(theta) = d * sec(theta) * tan(theta) / v = t * sqrt((v * t / d)^2 - 1)      ...(6)

Using (5) and (6), the pdf of t is

f_t(t) = d / (theta_1 * t * sqrt(v^2 * t^2 - d^2)) for d/v <= t <= (d/v) * sec(theta_1), and 0 otherwise      ...(7)

The probability of handoff failure is

Pf = 1 if tau > (d/v) * sec(theta_1);  Pf = P(t <= tau) if d/v <= tau <= (d/v) * sec(theta_1);  Pf = 0 if tau < d/v      ...(8)

where tau is the handoff signaling delay and P(t <= tau) is the probability that the MT leaves the WLAN before the handoff signaling completes. Integrating (7),

P(t <= tau) = arccos(d / (v * tau)) / theta_1                                  ...(9)

Using (8) and (9),

Pf = 1 if v * tau > d * sec(theta_1);  Pf = arccos(d / (v * tau)) / theta_1 if d <= v * tau <= d * sec(theta_1);  Pf = 0 if v * tau < d

with theta_1 determined by the geometry as above. This shows that the probability of handoff failure depends on the distance d from point P to the boundary of the cell, the velocity v and the handoff signaling delay tau. The use of an adaptive RSS threshold helps to reduce the handoff failure probability as well as the number of unnecessary handovers; the exact value of Sth depends on the MT's speed and the handoff signaling delay at a particular time, and the adaptive Sth is used to limit handoff failure. However, in this algorithm the handoff from the 3G network to a WLAN is not efficient when the MT's travelling time inside a WLAN cell is less than the handover delay, which may lead to wastage of network resources.

3.3.2 Bandwidth based vertical handoff algorithms

A Signal to Interference and Noise Ratio (SINR) based algorithm

Yang, in his paper [7], presented a bandwidth-based vertical handover decision method between WLANs and a Wideband Code Division Multiple Access (WCDMA) network using the signal to interference and noise ratio (SINR). An equivalent SINR is computed so that the SINR of the WLAN signal can be compared with the SINR of the WCDMA channel:

gamma_AP = Gamma_AP * ((1 + gamma_BS / Gamma_BS)^(W_BS / W_AP) - 1)

where gamma_AP and gamma_BS are the SINR at the mobile terminal when associated with the WLAN and with the WCDMA network, respectively; Gamma is the dB gap between uncoded quadrature amplitude modulation and channel capacity, minus the coding gain (Gamma_AP equals 3 dB for the WLAN and Gamma_BS equals 3 dB for the WCDMA network, as stated by the authors); and W_AP and W_BS are the carrier bandwidths of the WLAN and WCDMA links. SINR-based handovers can provide users with higher overall throughput than RSS-based handovers, since the available throughput depends directly on the SINR, and this algorithm results in a balanced load between the WLAN and the WCDMA network. However, such an algorithm may also introduce excessive handovers as the SINR varies, causing the node to hand over back and forth between the two networks, which is commonly referred to as the ping-pong effect.
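As a quick numeric illustration of this comparison, the sketch below maps a measured WCDMA SINR to its WLAN-equivalent value using the relation above and hands over to whichever link offers more; the carrier bandwidths, the linear conversion of the 3 dB gaps and the sample measurements are assumptions made for the example, not values taken from the cited paper.

# Equivalent-SINR comparison for a WLAN/WCDMA handover decision (illustrative).
# Implements gamma_AP = Gamma_AP * ((1 + gamma_BS / Gamma_BS) ** (W_BS / W_AP) - 1)
# with all SINR and Gamma values handled as linear ratios.

GAMMA_AP = 10 ** (3 / 10)   # 3 dB gap for the WLAN link, converted to linear
GAMMA_BS = 10 ** (3 / 10)   # 3 dB gap for the WCDMA link, converted to linear
W_AP = 20e6                 # assumed WLAN carrier bandwidth in Hz
W_BS = 5e6                  # assumed WCDMA carrier bandwidth in Hz

def wlan_equivalent_sinr(gamma_bs):
    """WLAN SINR that would give the same throughput as WCDMA SINR gamma_bs."""
    return GAMMA_AP * ((1 + gamma_bs / GAMMA_BS) ** (W_BS / W_AP) - 1)

def choose_network(gamma_ap_measured, gamma_bs_measured):
    """Pick the link whose (equivalent) WLAN SINR is higher."""
    return "WLAN" if gamma_ap_measured >= wlan_equivalent_sinr(gamma_bs_measured) else "WCDMA"

# Hypothetical measurements: WLAN SINR of 8 dB versus WCDMA SINR of 12 dB.
gamma_ap = 10 ** (8 / 10)
gamma_bs = 10 ** (12 / 10)
print(choose_network(gamma_ap, gamma_bs))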
3.3.2 Bandwidth based vertical handoff algorithm
A Signal to Interference and Noise Ratio (SINR) Based Algorithm
Yang, in his paper [7], presented a bandwidth-based vertical handover decision method between WLANs and a Wideband Code Division Multiple Access (WCDMA) network using a Signal to Interference and Noise Ratio (SINR) based algorithm. The SINR of the WLAN signal is converted into an equivalent SINR to be compared with the SINR of the WCDMA channel:
γAP = ΓAP·[(1 + γBS/ΓBS)^(WBS/WAP) - 1]
where γAP and γBS are the SINR at the mobile terminal when associated with the WLAN and with WCDMA, respectively. Γ is the dB gap between uncoded Quadrature Amplitude Modulation and channel capacity, minus the coding gain; the authors take ΓAP = 3 dB for the WLAN and ΓBS = 3 dB for the WCDMA network. WAP and WBS are the carrier bandwidths of the WLAN and WCDMA links. SINR-based handovers can provide users with higher overall throughput than RSS-based handovers, since the available throughput is directly dependent on the SINR, and this algorithm results in a balanced load between the WLAN and WCDMA networks. However, such an algorithm may also introduce excessive handovers as the SINR varies, causing the node to hand over back and forth between the two networks, which is commonly referred to as the ping-pong effect.

A Wrong Decision Probability (WDP) Prediction Based Algorithm
C. Chi, Cao, Hao and Liu, in their paper 'Modeling and analysis of handover algorithms', have proposed a vertical handover decision algorithm based on Wrong Decision Probability (WDP) prediction [8]. The WDP is calculated by combining the probability of an unnecessary handoff and that of a missing handoff. Assume that there are two networks i and j with overlapping coverage, and that bi and bj are their available bandwidths. An unnecessary handoff occurs when the mobile terminal is in network i and decides to hand off to j, but bj turns out to be less than bi after this decision. A missing handoff occurs when the mobile terminal decides to stay connected to network i, but bi turns out to be less than bj after this decision. A handover from network i to network j is initiated if Pr·ρ ≤ l0 or bj - bi ≥ L, where Pr is the unnecessary handover probability, ρ is the traffic load of network i, l0 = 0.001, and L is a bandwidth threshold. The authors show that this algorithm is able to reduce the WDP and balance the traffic load; however, the received signal strength is not considered. A handoff to a target network with high bandwidth but a weak received signal is not desirable, as it may bring discontinuity in the service.

3.3.3 Cost Function based vertical handoff algorithm
A Cost Function Based Algorithm with Normalization and Weights Distribution
Hasswa, N. Nasser and H. Hassanein, in their paper 'A context-aware cross-layer architecture for next generation heterogeneous wireless networks', have proposed a cost-function-based handover decision algorithm in which normalization and weights-distribution methods are provided. A quality factor is used to evaluate the performance of a handover target candidate:
Qi = Wc·Ci + Ws·Si + Wp·Pi + Wd·Di + Wf·Fi
where Qi is the quality factor of network i; Ci, Si, Pi, Di and Fi stand for cost of service, security, power consumption, network condition and network performance; and Wc, Ws, Wp, Wd and Wf are the weights of these network parameters. Since each network parameter has a different unit, a normalization procedure is used, and the normalized quality factor of network i among n candidates is calculated as
Qi = Wc·(1/Ci)/max((1/C1), …, (1/Cn)) + Ws·Si/max(S1, …, Sn) + Wp·(1/Pi)/max((1/P1), …, (1/Pn)) + Wd·Di/max(D1, …, Dn) + Wf·Fi/max(F1, …, Fn)
A handoff necessity estimator is also introduced to avoid unnecessary handovers. High system throughput and user satisfaction can be achieved with Hasswa's algorithm; however, some of the parameters, such as security and interference levels, are difficult to quantify.

A Weighted Function Based Algorithm
R. Tawil, G. Pujolle and O. Salazar, in their paper [10], presented a weighted-function-based vertical handover decision algorithm which transfers the handover decision calculation to the visited network instead of the mobile terminal. The weighted function of a candidate network is defined as
Qi = WB·Bi + WDp·(1/DPi) + WC·(1/Ci)
where Qi represents the quality of network i; Bi, DPi and Ci are the bandwidth, dropping probability and monetary cost of service; and WB, WDp and WC are their weights, with WB + WDp + WC = 1. The candidate network with the highest score Qi is selected as the handover target. By delegating the calculation to the visited network, the resources of the mobile node can be saved, so the system is able to achieve a short handoff decision delay, a low handoff blocking rate and higher throughput. However, the method requires extra cooperation between the mobile node and the point of attachment of the visited network, which may cause additional delay and excessive load on the network when there is a large number of mobile nodes.
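As a concrete illustration of how such scoring works, the sketch below combines Tawil's weighted function with the max-normalization idea from Hasswa's method, since the raw parameters have different units. The weights and the candidate-network values are hypothetical; neither paper prescribes these particular numbers.

```python
def normalized_weighted_scores(candidates, w_b=0.5, w_dp=0.3, w_c=0.2):
    """Score Q_i = W_B*B_i + W_Dp*(1/DP_i) + W_C*(1/C_i), with each term
    max-normalized across the candidate set so the units become comparable."""
    assert abs(w_b + w_dp + w_c - 1.0) < 1e-9
    b_max = max(b for b, _, _ in candidates.values())
    inv_dp_max = max(1.0 / dp for _, dp, _ in candidates.values())
    inv_c_max = max(1.0 / c for _, _, c in candidates.values())
    return {
        name: w_b * (b / b_max)
              + w_dp * ((1.0 / dp) / inv_dp_max)
              + w_c * ((1.0 / c) / inv_c_max)
        for name, (b, dp, c) in candidates.items()
    }

# Hypothetical candidates: (bandwidth in Mbit/s, dropping probability, monetary cost).
candidates = {
    "WLAN":  (11.0, 0.05, 1.0),
    "UMTS":  (2.0,  0.02, 3.0),
    "WiMAX": (10.0, 0.04, 2.0),
}
scores = normalized_weighted_scores(candidates)
print(scores, "-> handover target:", max(scores, key=scores.get))
```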
3.3.4 ANN based vertical handoff algorithm
A Multilayer Feedforward Artificial Neural Network Based Algorithm
N. Nasser, S. Guizani and E. Al-Masri, in their paper [11], developed a vertical handover decision algorithm based on artificial neural networks (ANN). The topology of the ANN consists of an input layer, a hidden layer and an output layer. The input layer consists of five nodes representing parameters such as cost, RSS and bandwidth of the handoff target candidate networks. The hidden layer consists of a variable number of nodes (neurons), which are essentially activation functions. The output layer has one node, which generates the ID of the candidate network chosen as the handover target. All the neurons use the sigmoid activation function. The authors assume the same cost function as in this work, and for ANN training they generate a series of user preference sets with randomly selected weights. The system then has to be trained to select the best candidate network among all the candidates. The authors report that, by properly selecting the learning rate and the acceptable error value, the system is able to find the best available candidate network successfully. Nevertheless, the algorithm suffers from a long delay during the training process, which may lead to connection breakdown.

A Method That Uses Two Neural Networks
Pahlavan, in his paper [12], has proposed two neural-network-based handoff decision methods; only the vertical handoff mechanism is discussed here. In this method, an artificial neural network is used for handoffs from the WLAN to the General Packet Radio Service (GPRS). The ANN consists of an input layer, two middle layers and an output layer. The mobile node performs periodic measurements of RSS, and the five most recent RSS samples are fed into the ANN. The output is a binary signal: the value '1' leads to a handover to GPRS, and the value '0' means that the mobile terminal should remain connected to the access point. The ANN is trained before it is used in the decision process. Training is done by taking a number of RSS samples from the access point while minimizing the handover delay and the ping-pong effect. This algorithm can reduce the number of handovers by eliminating the ping-pong effect, but the paper does not provide details on how exactly the neural network is trained and why the particular parameters are taken into consideration. The algorithm also has the shortcomings of complexity and of requiring the training process to be performed in advance.
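The forward pass of such a network is simple to sketch. The snippet below is only a structural illustration of the five-input, one-hidden-layer, sigmoid-activation topology described above: the hidden-layer size, the metric values and the random (untrained) weights are all assumptions, and the output is used here as a suitability score per candidate rather than a network ID. In the cited methods the weights would instead be obtained by training against the cost function and user-preference sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed topology: 5 inputs (e.g. cost, RSS, bandwidth, delay, power),
# 8 hidden neurons, 1 output neuron, all with sigmoid activations.
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def suitability(metrics):
    """Forward pass over a length-5 vector of normalized handoff metrics."""
    hidden = sigmoid(W1 @ metrics + b1)
    out = sigmoid(W2 @ hidden + b2)
    return float(out[0])

# Hypothetical, already-normalized metrics for three candidate networks.
candidates = {
    "WLAN":  np.array([0.8, 0.9, 0.2, 0.3, 0.4]),
    "UMTS":  np.array([0.6, 0.3, 0.6, 0.5, 0.5]),
    "WiMAX": np.array([0.7, 0.7, 0.4, 0.4, 0.6]),
}
scores = {name: round(suitability(m), 3) for name, m in candidates.items()}
print(scores, "-> best candidate:", max(scores, key=scores.get))
```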
Summary: From the above discussion it can be concluded that RSS-based vertical handoff algorithms can be used between microcellular and macrocellular networks, with the candidate network having the most stable RSS as the selection criterion. These algorithms are simple, but due to the fluctuation of RSS they are less reliable. Bandwidth-based vertical handoff algorithms can be used between any two heterogeneous networks, with the candidate network having the highest bandwidth as the selection criterion. These algorithms are also simple, but because the available bandwidth changes over time they are less reliable. Cost-function-based vertical handoff algorithms can be used between any two heterogeneous networks; here the inputs are various parameters such as cost, bandwidth and security, and the candidate network with the highest overall performance is selected. These algorithms are complex and, due to the difficulty of measuring parameters such as security, less reliable. ANN- and fuzzy-logic-based vertical handoff algorithms can also be used between any two heterogeneous networks; the inputs are parameters such as RSS, cost, bandwidth and security, depending on the method, and the candidate network with the highest overall performance is selected. These algorithms are very complex but, because the system is trained, they are highly reliable.

Chapter-4 Algorithms and Methodologies

4.1 Variance based vertical handoff algorithm
The proposed algorithm is a variance-based algorithm which calculates the variance of parameters such as delay, jitter, bandwidth and packet loss for various networks such as UMTS, WLAN and WiMAX; the network for which most of the parameters have minimum variance is selected. In our proposed algorithm, handoff metrics such as delay, bandwidth, jitter and packet loss are included.

Fig 4.1 Flow Chart of variance based algorithm

The variance is computed as
σ² = Σ(x - μ)² / N
where x is any metric such as delay, bandwidth or jitter, μ is the mean of a set of samples of that parameter, and N is the number of samples. In this algorithm, whenever the signal strength at a mobile terminal drops below the threshold, the mobile terminal requests a handoff to an accessible network. The algorithm checks whether a visitor network is available or not; if a visitor network is available, it broadcasts the required parameters such as packet delay, jitter, packet loss and bandwidth. The variance of each broadcast parameter is calculated from the number of samples received for that parameter. Then the candidate (visitor) network having the minimum variance for most of the parameters is selected. In this case, the variances of delay, jitter, packet loss and bandwidth are considered over a set of 100 received samples.

The variance of packet delay is calculated as
σd² = Σ(D - μd)² / N
where σd² is the variance of the packet delay parameter, D is the packet delay at a given instant, μd is the mean of the packet delay values received, and N is the total number of packet delay samples (100 in this case). Similarly, the variance of bandwidth is calculated as
σB² = Σ(B - μB)² / N
where σB² is the variance of the bandwidth parameter, B is the bandwidth at a given instant, μB is the mean of the bandwidth values received, and N is the total number of bandwidth samples (100 in this case). In the same way, the variance of jitter is calculated as
σJ² = Σ(J - μJ)² / N
where σJ² is the variance of the jitter parameter, J is the jitter at a given instant, μJ is the mean of the jitter values received, and N is the total number of jitter samples (100 in this case). The variance of packet loss is calculated as
σP² = Σ(P - μP)² / N
where σP² is the variance of the packet loss parameter, P is the packet loss at a given instant, μP is the mean of the packet loss values received, and N is the total number of packet loss samples (100 in this case).

Out of these variances σd², σB², σJ² and σP², the candidate network that attains the minimum value for the largest number of parameters is selected:
Score = arg max over i = 1, …, M of (the number of parameters among σd², σB², σJ², σP² for which network i has the minimum variance)
The candidate network which satisfies the above equation is selected, where M is the number of candidate networks.
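A minimal sketch of this selection rule is shown below. The structure (per-network variance of delay, bandwidth, jitter and packet loss over 100 samples, then a count of minimum-variance "wins") follows the description above, but the sample data are synthetic and the tie-breaking behaviour is an assumption, since no tie rule is specified.

```python
import random
from statistics import pvariance

random.seed(1)

def select_network(samples):
    """samples: {network: {parameter: list of N observations}}.
    For every parameter, find the network with the minimum variance; the network
    that wins the most parameters is selected (ties broken by dictionary order)."""
    networks = list(samples)
    params = list(next(iter(samples.values())))
    variances = {n: {p: pvariance(samples[n][p]) for p in params} for n in networks}
    wins = {n: 0 for n in networks}
    for p in params:
        wins[min(networks, key=lambda n: variances[n][p])] += 1
    return max(wins, key=wins.get), wins

def noisy(mean, spread, n=100):
    """Synthetic stand-in for the 100 broadcast samples of one parameter."""
    return [random.gauss(mean, spread) for _ in range(n)]

samples = {
    "UMTS":  {"delay": noisy(60, 5),  "bandwidth": noisy(2, 0.2),  "jitter": noisy(8, 1),   "loss": noisy(1.0, 0.3)},
    "WLAN":  {"delay": noisy(20, 15), "bandwidth": noisy(11, 3),   "jitter": noisy(5, 4),   "loss": noisy(0.5, 0.5)},
    "WiMAX": {"delay": noisy(40, 4),  "bandwidth": noisy(10, 0.5), "jitter": noisy(6, 0.8), "loss": noisy(0.8, 0.2)},
}
chosen, wins = select_network(samples)
print(wins, "-> selected network:", chosen)
```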
4.2 SNR based vertical handoff algorithm
The proposed algorithm is an SNR-based algorithm which calculates the SNR value, along with parameters such as delay, jitter, bandwidth and packet loss, for various networks such as UMTS, WLAN and WiMAX; the network with the maximum SNR is selected. In our proposed algorithm, handoff metrics such as delay, bandwidth, jitter and packet loss are included.

Wednesday, November 20, 2019

Critique summary of a research study Essay Example | Topics and Well Written Essays - 500 words

Critique summary of a research study - Essay Example The purpose of the study was further described in the background section, which clarified that delirium is a rather common occurrence in frail older adults and that there is a need to develop an understanding of the factors which influence recovery. The paper further suggests that the results from this study can be used to influence care management and other procedures at hospitals in order to improve the detection and recovery process. The methods section of the study shows that the research conducted concerning the selection and identification of the patients was excellent. The researchers went to great lengths to get the data they needed about the patients. The patients sampled all had baseline mobility and could show increased dependence on care services if their conditions worsened, and this was an important factor for this study. Procedures for the study included an interview at the time of admission to the hospital and a follow-up interview after six months, which gave the patients further questions to answer. The primary strength of the study is that it leads to conclusive results which allow hospital managers and working staff to use it as part of their operational guidelines. For example, knowing that frailty and a poor diagnosis can lead to worse outcomes, caregivers can be aware that a patient who is frail may have problems such as delirium alongside other medical conditions. Similarly, the study can also raise awareness of the importance of recognizing delirium as a possible cause of the problems faced by a patient. The weakness present in the study (which was also acknowledged by the scientists conducting it) is that the sample size was too small to draw general conclusions which are universally applicable. The sample size of 77, of which only 50 could be found after six months, is surely too

Tuesday, November 19, 2019

Arbitrage Research Paper Example | Topics and Well Written Essays - 250 words

Arbitrage - Research Paper Example In this case, the call price is $1.15 while the put price is $0.55. The company's stock does not pay any dividends. This shows that an arbitrage position does not exist for the firm, because the difference between the two prices is fairly small and the company already holds the stock at the exercise price (Chen, 2005). The time to expiry is only 60 days, and the stock can perform better than if it were sold by the company for an immediate profit. The company would therefore incur a loss in the long term if the arbitrage were exercised. The annual interest rate receivable by the company is a factor in calculating the value of the stock; given that it is 12%, the possibility of earning returns above this when arbitrage is used is quite minimal. There are other costs involved when making a call, such as brokerage fees and taxes on capital gains, so it is simpler to hold onto the shares and

Saturday, November 16, 2019

Cincinnati Subs Essay Example for Free

Cincinnati Subs Essay Cincinnati Super Subs' workforce is made up mostly of young students in college and in high school; management has indicated that the chain has experienced "below average profitability over the past 18 months" (McShane & Von Glinow, 2013, p. 156). The below-average profitability is linked to the increase in food wastage; management has taken steps to address the issue, from reducing and eliminating food allowance accounts to increasing the visual scrutiny given to workers. The changes made by management were only successful for a few months; after the managers reduced their time in daily operations, the amount of food wastage began to increase again. To reduce food wastage, management must maintain a presence in daily operations at the sub chain, create and enforce punishments for violations, set goals, and create rewards for the reduction in food wastage.
Indications of Problems at Cincinnati Super Subs
Food wastage at Cincinnati Super Subs is a huge problem that has resulted in the reduction of monthly bonuses given to management. The staff at the chain complain about the amount, or lack, of food allowances given, which prompts employees to "help themselves to food and drinks when the managers aren't around" (McShane & Von Glinow, 2013, p. 156). The reduction in profitability at stores over the past 18 months indicates that the levels of food wastage have significantly affected management-employee relations and overall costs at the chain. The importance of reducing food wastage is not appreciated by non-management staff; they feel that the amount of food and drinks given away or eaten is not a significant share of profits. Corrective actions taken against employees have reduced staff levels, as some employees have quit and warned others against seeking employment at Cincinnati Super Subs. Management failed to accurately identify and correct the causes of food wastage, and in the process caused employee dissatisfaction.

Thursday, November 14, 2019

Candida albicans Essays -- Essays Papers

Candida albicans Candida albicans is a dimorphic fungus. This means that C. albicans has two different phenotypic forms: an oval-shaped yeast form and a branching hyphal form. C. albicans' normal habitat is the mucosal membranes of humans and various other mammals, including the mouth, gut and vagina, and sometimes the skin. Normally C. albicans causes no damage and lives symbiotically with the human or animal host, even helping to break down minute amounts of fiber that are eaten in the host's diet. The normal bacterial flora of the gut, mouth and vaginal mucosa act as a barrier to the overgrowth of fungi such as C. albicans. Loss of this normal flora is one of the main factors predisposing to an infection by C. albicans. There are many ways that a C. albicans infection may occur. One is the use of antibiotics for an extended period of time to combat a pre-existing bacterial infection. Since the antibiotics used will kill only the bacteria and not any fungus, this allows C. albicans to gain a foothold over the local mucosa that it is inhabiting, be it the gut, th...

Monday, November 11, 2019

Griswold’s concept of “Culture” from a sociological viewpoint Essay

Griswold explored the concept of "culture" through two different perspectives, namely the viewpoints of the humanities and of anthropology. With reference to different philosophers' interpretations of "culture", Griswold defines culture broadly as "a complex whole", including everything in a social world. Since culture and the social world are related, to achieve a fuller understanding of "culture" we have to examine the connection between them and how the two come together. The conceptual tools she uses to investigate the connections between cultures and societies are the "cultural object" and the "cultural diamond". Cultural objects are given meanings shared by members of the culture, and it is through those meanings that the objects are linked to social worlds. Therefore, we need to decipher how the meanings come about. Griswold introduces and compares different versions of reflection theory in functionalism, Marxism and Weberian sociology, whereby culture is seen as a reflection of social life, or vice versa. The mirror theory is based on the assumption that culture is the mirror of social reality, reflecting the social world. This is an idea central to the functionalist and Marxist reflection theories; however, she points out that while they share the same reflection model, the essence of the two is opposite. Under Marx's view, everything, even human consciousness, starts from and has its history as a product of human labor (homo faber). Culture is a concept largely based on the material forces of production and the economic foundation of a society. This "historical materialism", together with the production relations of society, is the true root of culture; therefore, it is social being that determines men's existence. For functionalism, however, culture is based on the mutual interdependence of members in meeting the needs of a society. Every component in a society is reflective of the others. There are no class struggles, as opposed to Marxist theory. As for Weber's view of the mirror theory, Weber takes into account that the culture-society relationship is two-way: social action reflects cultural meanings. He argues that while material interests are still being pursued, the way men pursue their interests shows cultural causes, as in his famous "switchman" metaphor. The above modern sociological theories illustrate how culture is related, in many different ways, to the social world. Modern music is an example of how societal actions reflect our changing culture: music has always been a vehicle by which we express our values. In recent years, lyrics containing profanity or vulgar language have become very common and very popular among youngsters. This reflects that our culture has become more outspoken and less censorious. Conservatism is no longer something valued. Take another example: women these days are obsessed with slimming, as thinness is now commonly regarded as beautiful. Women go to great lengths to achieve their ideal body shape, just to conform to social pressures. This is an example showing how social phenomena can reflect culture, even though this culture is not necessarily beneficial to the society. Bibliography: Griswold, Wendy. 2004. Cultures and Societies in a Changing World. CA: Pine Forge Press. Selections.

Saturday, November 9, 2019

Lucent Technologies Case

The financial statements for Lucent Technologies are for September 30, 2003 and 2004. After reviewing the balance sheet, I could determine that Lucent Technologies' total assets had increased by 1,052 million. This shows total assets are in an upward trend and the company has steadily built assets over the last year rather than decreasing them. The company's goal is to raise profits, and one way of raising profits is to increase its assets. Total liabilities have decreased by 940 million. Total current liabilities have decreased over the year while long-term liabilities have increased. In 2003, Lucent Technologies' debt-to-asset ratio was .83, and in 2004 the debt-to-asset ratio was .92, which means .92 of Lucent Technologies' assets were paid for by borrowing money. What this shows is that Lucent Technologies may pay higher interest on money borrowed because its debt-to-asset ratio is so high. By reducing its debt load and controlling purchases, the company can reduce its total debt-to-asset ratio. Companies acquiring too much debt may have trouble paying creditors, which could force them into bankruptcy. Total shareowners' deficit has decreased over the year. While the company is currently looking at a deficit, it is heading in an upward trend where shareholders could start receiving dividend payouts. Investors reviewing Lucent Technologies' current balance sheet may have a hard time investing in the company, as much of the assets owned by the company were purchased on credit. Creditors may loan Lucent Technologies money for future investments, but it would be at a higher interest rate, as the current debt-to-asset ratio is high. Another problem creditors and investors may have with the current balance sheet is that Lucent Technologies is only providing them with information for one year. Even though the balance sheet reflects improvements in company profits over the past year, it doesn't provide creditors and investors with enough information to make an informed decision. Creditors and investors would need financial statements for multiple years before investing in the company. By viewing the statement of cash flows, investors are able to determine how much cash comes into and goes out of the company during the year. It shows investors how the company is able to pay for its operations and future growth. Lucent Technologies provided a balance sheet for September 30, 2003 and 2004. There is limited value in the data provided by Lucent Technologies for investors and creditors to make informed decisions before investing in or lending money to this company. Other financial statements investors and creditors need to view are the income statement and the statement of cash flows. The income statement provides the revenue earned minus expenses incurred over a specific period of time. Investors need to view the statement of cash flows to determine the increases and decreases in cash made by Lucent Technologies.
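For reference, the debt-to-asset ratio cited above is simply total liabilities divided by total assets. The figures in the snippet below are placeholders chosen only to illustrate the calculation; they are not Lucent's reported balance-sheet values.

```python
def debt_to_asset_ratio(total_liabilities, total_assets):
    """Share of a company's assets financed by borrowing."""
    return total_liabilities / total_assets

# Placeholder figures in millions (illustration only).
print(round(debt_to_asset_ratio(14_000, 16_900), 2))  # -> 0.83
```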

Thursday, November 7, 2019

buy custom Wallace Group essay

Organizations have different reasons for diversification. One of them is to gain synergy among business units. The Wallace Group diversified vertically by acquiring suppliers, but the business did not perform as expected. This essay will evaluate the most critical problems that the Wallace Group is facing, discuss appropriate strategies to solve the issues, and provide an opinion on Wallace's diversification plan. Poor leadership is the first important problem that occurs at Wallace. Leadership is the means through which a person influences others to attain an objective (Hoyes 60). The effectiveness of leaders depends on the followers' perception of them and the extent to which they agree with the leaders' philosophies and values. The unsatisfactory leadership at the Wallace Group emerges because of the President's inability to influence the rest of the workers. First, he does not appreciate the contribution of other employees. Consequently, they become frustrated. Everyone in the company contributes to its health and awareness regardless of the position they hold in the organizational structure. Therefore, the failure of the President to harness the potential of the employees has affected the performance of the enterprise negatively. As a result of the inadequate management, the Electronics, Chemicals and Plastics functional groups work as independent units instead of being cohesive parts of the Wallace Group. The lack of coordination has prevented all three divisions of the company from achieving synergy and complementarity. Leaders must take a systemic thinking approach, which allows them to see how the different parts of the organization fit into the corporation's strategy. Since each of the units functions independently, Wallace cannot determine what the problem is, which is why there was a need to hire a consultant to advise on how to run the organization. Before deciding to diversify, Wallace did not create a strategy that would outline the tasks and the people accountable for them. Consequently, the staff of the organization are determining each person's responsibilities while promoting technical people to managerial positions despite their inexperience. The corporate employees are not able to provide adequate guidance and coordination to the functional units of the company because they are still at the learning stage. The second essential problem at Wallace is the inadequate management of personnel. The first admission that Wallace made to the consultant was that managing people was problematic. The company has no human resource policy on job descriptions or on how to determine the qualifications of workers. Thus, the administration hires unqualified people for engineering and advanced systems. The directors of both engineering and advanced systems reject them, creating a shortage of skilled employees. The current workers are overwhelmed with their duties, which contributes to their low morale. Moreover, the company has no benefit and reward systems apart from the stock ownership program. The stock option may not motivate the workers enough because it is dependent on the company's performance, which is currently minimal. The absence of a training and development program also highlights the personnel management deficiencies. Those holding management positions without prior experience cannot improve the situation and thus will continue underperforming.
The process of training and developing workers relates positively to organizational effectiveness (Niazi 43). The first recommendation for Wallace to solve the above-mentioned issues is to create a collaborative work environment where people from all the units can contribute to the decisions of the company. Such an environment is the first step towards creating policies and strategies that are effective. To achieve this objective, Wallace needs to involve all the vice presidents and program directors in making decisions. Their opinions will provide the administration with a holistic view of the organization and give suggestions on how to eliminate the current problems. Secondly, Wallace should promote the creation of a unified strategy that will outline the role of each functional unit and its contribution to the organization's performance. Leadership is critical in both the creation and the implementation of strategies (Azhar et al. 33). Since the realization of the strategy depends on all employees, involving the leaders at various levels, it is critical to encourage the frontline employees to accomplish it. In order to solve the non-cooperation among the functional units, the leaders at Wallace should create cross-functional teams to work jointly on various projects. The teamwork will enhance understanding of each group's contribution and challenges, which in turn can eliminate conflicts and misunderstandings. The industrial relations department should formulate a human resource policy to address the personnel problems. The policy formulation should engage the heads of each of the three units because they understand the skill sets they want. As such, only qualified people will be hired, which will reduce the workload of the employees. The corporation should introduce a training program, especially for technical employees who may wish to advance to management levels. The program can enhance understanding between the corporate and the functional groups. Finally, the Wallace Group should create an employee benefit and reward system, which is likely to raise the morale and productivity of the workers. To my mind, the diversification plan was a good idea, but the manner of its implementation was wrong. The company stands to gain from diversification through the synergy benefits that accrue from complementarity. The three units would have complemented each other if there had been a strategy to coordinate their activities. Although such harmonization would have resulted in additional costs (Zhou and Robert 625), it would have been cheaper than the loss the company is experiencing due to the chaos among the units. However, since there was no particular plan for the diversification, the operations director now finds it more expensive to purchase from any of Wallace's groups than from outside suppliers. Consequently, the diversification venture is more a burden than a benefit to the company. In conclusion, the two main problems affecting the Wallace Group are leadership and employee management deficiencies. The recommendations include the creation of a collaborative workplace, a clear strategy, cross-functional teams and a human resource policy.

Monday, November 4, 2019

Reactive Power compensation Essay Example | Topics and Well Written Essays - 1750 words

Reactive Power compensation - Essay Example This is due to the fact that it has no reactive power at all; as a matter of fact, its reactive power is equal to zero. In this case, the power triangle mimics a horizontal line. This should logically be so, noting that the opposite side, which represents reactive power, has a length of 0 cm. An inappropriate power factor can be rectified, paradoxically, through the addition of an extra load to the circuit. In essence, the added load supplies an equivalent reactive power acting in the opposite direction. The addition cancels the effects resulting from the load's inductive reactance. Notably, only capacitive reactance can cancel inductive reactance, and hence a parallel capacitor is added to the provided circuit to act as the extra load. As a result of the two reactances acting in opposite directions, and in parallel with each other, the circuit's total impedance becomes equivalent to the resistance alone. This makes the impedance phase angle equal to, or at least close to, zero. Knowing that the uncorrected reactive power is 561.724 VAR (inductive), there is a need to derive the right size of capacitor to generate an equivalent amount of reactive power. Given that the identified capacitor will act in parallel with the source, the calculation begins with the identification of the voltage and the required reactance, from which the capacitor value follows. The simulation is done using a rounded-off capacitor value of 29, yielding the following results: True power = 447.002, Apparent power = 447.008. For case 2, where the capacitor improves the power factor to 0.95 lagging: the circuit has both inductance and resistance, and hence the two are combined into a single impedance. Given that P is the true power, Q the reactive power and S the apparent power, the circuit is redrawn with the resistive/reactive load. For a power factor of 0.95, the capacitive reactance XC must equal the original XL minus the improved-power-factor XL, i.e. XC = 80.2986 - 16.434 = 63.8646 ohms. Simulating this, a rounded value of 20 is used, as shown: True power = 447.
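One common way to size a parallel power-factor-correction capacitor is sketched below. It follows the generic textbook relations Qc = Q - P*tan(acos(target_pf)), Xc = V^2/Qc and C = 1/(2*pi*f*Xc), rather than reproducing the series-reactance figures quoted in the excerpt, and the 120 V / 60 Hz supply is an assumption, since the excerpt does not state the source voltage or frequency.

```python
import math

def correction_capacitor(p_true_w, q_reactive_var, target_pf, v_rms, freq_hz):
    """Size a parallel capacitor that raises the load power factor to target_pf."""
    q_target = p_true_w * math.tan(math.acos(target_pf))
    q_c = q_reactive_var - q_target            # reactive power (VAR) the capacitor must supply
    x_c = v_rms ** 2 / q_c                     # required capacitive reactance (ohms)
    c_farads = 1.0 / (2 * math.pi * freq_hz * x_c)
    return q_c, x_c, c_farads

# Uncorrected values quoted above (P ~ 447 W, Q ~ 561.724 VAR) with an assumed
# 120 V, 60 Hz supply that is NOT stated in the excerpt.
q_c, x_c, c = correction_capacitor(447.002, 561.724, 0.95, 120.0, 60.0)
print(f"Qc = {q_c:.1f} VAR, Xc = {x_c:.2f} ohm, C = {c * 1e6:.1f} uF")
```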

Saturday, November 2, 2019

Quality Management System - Question Essay Example | Topics and Well Written Essays - 750 words

Quality Management System - Question - Essay Example tivity: A Brief Survey"). b) From the above calculations, it can be inferred that the labor productivity of Firm A is much lower than that of its competitor, Firm B, and hence Firm A needs to improve its productivity per unit of labor employed. Thus, Firm A should focus more on human resource optimization in order to improve its labor productivity. The plant and equipment productivity of both firms is the same, implying that both have utilized their assets to the same extent. The material productivity is also almost the same. In terms of royalty, Firm A seems to be in a better position compared to its competitor, Firm B, as it has a better sales value. The energy productivity of Firm A is much higher than that of Firm B, implying that it has utilized its energy component optimally. The total productivity of both firms is also almost the same; while Firm A utilizes its energy component better, Firm B optimizes its labor utilization. Thus, it can be inferred from the total productivity that the prospects of Firm A seem slightly superior to those of Firm B.
Problem 2
Histogram for Hours of Overtime
It can be inferred from the above histogram that 12 employees, which is 40% of the entire number of employees, fall in the range of (93 - 185) overtime hours. The ranges of (0 - 92) overtime hours and (186 - 278) overtime hours have 7 employees each, which is equivalent to 23.3% of the total number of employees. There are two employees each in the overtime-hours ranges of (279 - 371) and (372 - 464). The mean value of the overtime hours is 167.26 hours, and it can be noted from the histogram that more than 50% of the total employees have overtime hours less than the mean. To conclude, the hours of overtime appear to be normally distributed amongst the 30 employees.
Histogram for Days Absent
The above histogram shows that almost half of the total employees (i.e. 14, which is 46.6% of the total) have a number of days absent ranging from (0 - 1.2) days. The next highest number of workers' absence days falls in the range of (2.6 - 3.8) days. There are 6 employees whose days of absence fall in the range of (1.3 - 2.5) days, while there are only 3 employees (10% of the total) whose absence days fall in the range of (5.2 - 6.4) days. From the distribution of the histogram it can be seen that more than half of the total workers (63% of the total employees) have absence days below the mean value of 1.93 days. To conclude, the distribution of the histogram appears to be shifted towards the left, which means most workers have absence days less than the mean value.
Problem 3
The data overtime hours and