Monday, December 30, 2019

Hamlet and Achilles

The truly great human stories deal with the most basic elements of human emotions and motivations. The following stories are perfect examples of those elements. Revenge, love, lust, betrayal, loss and grief are all powerful storytelling tools and powerful elements within stories. But stories are also specific, about specific people, specific times and specific cultures. The Iliad is a sweeping war story that ends in the victory of the Greeks, even at great cost to the victors. Hamlet is more of a personal tragedy that ends in defeat and death for all. The major players might seem like they could not be more distinct on their face. But, in fact, the characters of Achilles and Hamlet have notable similarities. They are both ultimately spurred to their pivotal decisions and behaviors by a desire for vengeance, connected to a strong feeling of duty and even a sense of piety. Yet, because Achilles is a soldier and Hamlet an intellectual prince, their motivations and actions vary wildly in terms of timing, execution, approach and ultimate success, even though both men are ultimately killed as a result of their actions. Hamlet and Achilles each respond to the death of a person close to them. Hamlet has lost his father, and we see in the beginning of the play that he suffers from a deep melancholy, which

Sunday, December 22, 2019

What Is Meaningful? It's One of the Big Questions About Human Existence

What is Meaningful? Philosophy-1301-001 Charles Clinton Hinkley 5/31/2015 The meaning of life. It's one of the big questions in philosophy, one of the big questions about human existence. For many Christians, or at least according to Page (2015), the belief is that "the meaning of life is to fulfill God's will, live our lives, have a career, make a family, have fun, and then die at the time God appointed for us to die." (Page, Pg. 1) To some atheists, the belief is that there is no meaning to life. They believe in evolution, to survive and reproduce; that we're just a tiny speck of this huge universe. "It is more important to find what makes this life precious and worth living, rather than hoping that the 'next one' will be better." (AFA) Since everything in creation came from this God, it too would automatically have meaning and purpose. God's own individual purpose or meaning is unknown to us, except some people believe that God creates us, loves us, and wants to fellowship with us in eternity. He is the only supreme God, and He has always existed. In the beginning, God is the only thing that existed. If God has meaning, we can assume the environment around us has meaning. If God has a plan, then we can assume everything else that has come into existence has a purpose in that plan. God is sovereign. The meaning of life is similar to a person building something. If a person collects wood, buys tools, and then proceeds to hammer the wood together, we can assume that the creation is going to be there for a reason. He or she may be building a house, or a storage building, etc. But none of us would suggest that the man is building it for nothing. Any reasonable person would logically conclude that the man has a purpose for his creation. It was created with purpose in mind. Going back to the introduction, atheists say that there isn't a meaning to life. You're just a tiny grain of sand compared to the universe that we live in. Relying on evolution to be able to keep surviving and reproducing, to continue to live. If God does not exist,

Saturday, December 14, 2019

Surface Pressure Measurements on an Aerofoil

DEN 302 Applied Aerodynamics SURFACE PRESSURE MEASUREMENTS ON AN AEROFOIL IN TRANSONIC FLOW

Abstract
The objective of this exercise is to measure the pressure distribution across the surface of an aerofoil in a wind tunnel. The aerofoil is tested under several different Mach numbers from subsonic to supercritical. The purpose of measuring the pressure distributions is to assess the validity of the Prandtl-Glauert law and to discuss the changing characteristics of the flow as the Mach number increases from subsonic to transonic. As a result of the experiment and computation of data, the aerofoil was found to have a critical Mach number of M = 0.732. Below this freestream Mach number the Prandtl-Glauert law predicted results very successfully. However, above this value, the law completely breaks down. This was found to be the result of local regions of supersonic flow and local shockwaves.

Contents: Abstract; Apparatus (1. Induction Wind Tunnel with Transonic Test Section; 2. Aerofoil model; 3. Mercury manometer); Procedure; Theory; Results; Discussion (Transonic Flow; Analysis); Conclusion; Bibliography.

Apparatus
1. Induction Wind Tunnel with Transonic Test Section
The tunnel used in this experiment has a transonic test section with liners, which, after the contraction, remain nominally parallel bar a slight divergence to accommodate for boundary layer growth on the walls of the test section. The liners on the top and bottom are ventilated with longitudinal slots backed by plenum chambers to reduce interference and blockage as the Mach number increases to transonic speeds. The working section dimensions are 89 mm (width) x 178 mm (height). The stagnation pressure, p0∞, is close to the atmospheric pressure of the lab and, with only a small error, is taken to be equal to the settling chamber pressure. The reference static pressure, p∞, is measured via a pressure tapping in the floor of the working section, well upstream of the model so as to reduce the disturbance due to the model. The 'freestream' Mach number, M∞, can be calculated from the ratio of static to stagnation pressure. The tunnel airspeed is controlled by varying the pressure of the injected air, with the highest Mach number that can be achieved by the tunnel being 0.88.
2. Aerofoil model
The model used is untapered and unswept, having the NACA 0012 symmetric section. The model chord length, c, is 90 mm and the model has a maximum thickness-to-chord ratio of 12%. Non-dimensionalised co-ordinates of the aerofoil model are given in table 1 below. Pressure tappings, 1-8, are placed along the upper surface of the model at the positions detailed in table 1. An additional tapping, 3a, is placed on the lower surface of the aerofoil at the same chordwise position as tapping 3. The reason for including the tapping on the lower surface is so that the model can be set at zero incidence by equalizing the pressures at 3 and 3a.
3. Mercury manometer
A multitube mercury manometer is used to record the measurements from the tappings on the surface of the model. The manometer has a 'locking' mechanism which allows the mercury levels to be 'frozen' so that readings can be taken after the flow has stopped. This is useful as the wind tunnel is noisy. The slope of the manometer is 45 degrees.

Procedure
The atmospheric pressure, pat, is first recorded in inches of mercury.
For a range of injected pressures, Pj, from 20 to 120 psi, the manometer readings are recorded for the stagnation pressure (l0∞), the reference static pressure (l∞), and the surface pressures from the tappings on the model (ln, for n = 1-8 and 3a).

Theory
These equations are used in order to interpret and discuss the raw results achieved from the experiment. To convert a reading, l, from the mercury manometer into an absolute pressure, p, the following is used:

p = pat ± (l − lat) sin θ    (1)

For isentropic flow of a perfect gas with γ = 1.4, the freestream Mach number, M∞, is related to the ratio between the static and stagnation pressures by the equation:

M∞ = { (2/(γ − 1)) [ (p∞/p0∞)^(−(γ−1)/γ) − 1 ] }^(1/2)    (2)

The pressure coefficient, Cp, is given by:

Cp = (p − p∞) / (½ ρ∞ U∞²)    (3)

For compressible flow this can be rewritten as:

Cp = (2 / (γ M∞²)) (p/p∞ − 1)    (4)

The Prandtl-Glauert law states that the pressure coefficient, CPe, at a point on an aerofoil in compressible, sub-critical flow is related to the pressure coefficient, CPi, at the same point in incompressible flow by the equation:

CPe = CPi / √(1 − M∞²)    (5)

Due to its basis in thin aerofoil theory, this equation does not provide an exact solution. However, it is deemed reasonably accurate for cases such as this, in which thin aerofoils are tested at small incidence. The law does not hold in super-critical flow, when local regions of supersonic flow and shockwaves appear. The value of the critical pressure coefficient, Cp*, corresponding to local sonic conditions is calculated by:

Cp* = (1 / (0.7 M∞²)) { [ (5 + M∞²)/6 ]^3.5 − 1 }    for γ = 7/5    (6)

The co-ordinates for the NACA 0012 section are as follows: Figure 1 - Co-ordinates for aerofoil (Motallebi, 2012)

Results
Given atmospheric conditions of Patm = 30.65 in-Hg and Tatm = 21 °C, the following results were achieved:
Figure 2 - Pressure coefficient vs x/c for M = 0.83566
Figure 3 - Pressure coefficient vs x/c for M = 0.83119
Figure 4 - Pressure coefficient vs x/c for M = 0.79367
Figure 5 - Pressure coefficient vs x/c for M = 0.71798
Figure 6 - Pressure coefficient vs x/c for M = 0.59547
Figure 7 - Pressure coefficient vs x/c for M = 0.44456
Figure 8 - Cp* and Cpmin vs Mach number
From figure 8 the critical Mach number is able to be determined. The critical Mach number (the maximum freestream Mach number that can be reached before local sonic conditions arise) occurs at the point where the curves for Cp* and Cpmin cross. From figure 8 we can see that this value is M∞ = 0.732.

Discussion
Transonic Flow
Transonic flow occurs when 'there is mixed sub and supersonic local flow in the same flow field' (Mason, 2006). This generally occurs when the free-stream Mach number is in the range of M = 0.7-1.2. The local region of supersonic flow is generally 'terminated' by a normal shockwave, resulting in the flow slowing down to subsonic speeds. Figure 9 below shows the typical progression of shockwaves as the Mach number increases. At some critical Mach number (0.72 in the case of figure 9), the flow becomes sonic at a single point on the upper surface of the aerofoil. This point is where the flow reaches its highest local velocity. As seen in the figure, increasing the Mach number further results in the development of an area of supersonic flow. Increasing the Mach number further again then moves the shockwave toward the trailing edge of the aerofoil, and a normal shockwave will develop on the lower surface of the aerofoil. As seen in figure 9, approaching very close to Mach 1, the shockwaves move to the trailing edge of the aerofoil. For M > 1, the flow behaves as expected for supersonic flow, with a shockwave forming at the leading edge of the aerofoil.
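To make the data-reduction chain in the Theory section concrete, the short Python sketch below works through equations (1), (2), (4), (5) and (6) for a single tapping. The manometer readings, the reference levels and the incompressible Cp used here are invented illustrative values, not the figures recorded in this experiment, so the printed numbers only indicate how the calculation proceeds rather than reproducing the report's results.

```python
import math

GAMMA = 1.4                   # ratio of specific heats for air
SLOPE = math.radians(45)      # manometer slope, 45 degrees

def reading_to_pressure(l, l_at, p_at):
    """Equation (1): convert an inclined-manometer reading l (inches along the
    tube) into an absolute pressure in inches of mercury."""
    return p_at + (l - l_at) * math.sin(SLOPE)

def mach_from_pressures(p_static, p_stag):
    """Equation (2): freestream Mach number from the static/stagnation pressure
    ratio, assuming isentropic flow of a perfect gas."""
    return math.sqrt((2.0 / (GAMMA - 1.0)) *
                     ((p_static / p_stag) ** (-(GAMMA - 1.0) / GAMMA) - 1.0))

def cp_compressible(p_surface, p_static, mach):
    """Equation (4): surface pressure coefficient in compressible flow."""
    return (2.0 / (GAMMA * mach ** 2)) * (p_surface / p_static - 1.0)

def prandtl_glauert(cp_incompressible, mach):
    """Equation (5): scale an incompressible Cp to sub-critical compressible flow."""
    return cp_incompressible / math.sqrt(1.0 - mach ** 2)

def cp_critical(mach):
    """Equation (6): critical pressure coefficient for gamma = 7/5."""
    return (1.0 / (0.7 * mach ** 2)) * (((5.0 + mach ** 2) / 6.0) ** 3.5 - 1.0)

if __name__ == "__main__":
    # Invented example readings (inches), purely to show the order of operations.
    p_at, l_at = 30.65, 10.0
    p_stag = reading_to_pressure(10.0, l_at, p_at)    # stagnation ~ atmospheric
    p_static = reading_to_pressure(4.5, l_at, p_at)   # reference static pressure
    p_tap = reading_to_pressure(3.8, l_at, p_at)      # one surface tapping

    mach = mach_from_pressures(p_static, p_stag)
    print(f"M_inf            = {mach:.3f}")           # subsonic for these inputs
    print(f"Cp at the tapping = {cp_compressible(p_tap, p_static, mach):.3f}")
    print(f"Cp* at this Mach  = {cp_critical(mach):.3f}")
    print(f"P-G scaling of an assumed incompressible Cp of -0.43: "
          f"{prandtl_glauert(-0.43, mach):.3f}")
```

In the actual exercise the same chain would simply be repeated for every tapping and for each injection pressure to build the -Cp vs x/c distributions plotted in figures 2-7.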
Figure 9 - Progression of shockwaves with increasing Mach number (H. H. Hurt, 1965)
In normal subsonic flow, the drag is composed of three components: skin friction drag, pressure drag and induced drag. The drag in the transonic regime is markedly increased due to changes to the pressure distribution. This increased drag encountered at transonic Mach numbers is known as wave drag. The wave drag is attributed to the formation of local shockwaves and the general instability of the flow. This drag increases at what is known as the drag divergence Mach number (Mason, 2006). Once the transonic range is passed and true supersonic flow is achieved, the drag decreases.

Analysis
From figure 8, the conclusion was reached that the critical Mach number was 0.732. This ultimately means that in the experiment local shockwaves should be experienced somewhere along the aerofoil for the Mach numbers M = 0.83566, 0.83119 and 0.79367. According to transonic theory, these shockwaves should move further along the length of the aerofoil as the freestream Mach number increases. To determine the approximate position of the shockwaves it is useful to look again at equation (4):

Cp = (2 / (γ M∞²)) (p/p∞ − 1)

Assuming constant p∞ (as the static pressure in the test section is assumed to be constant) and a constant freestream Mach number as well, equation (4) may be written as:

Cp = const. × (p/const. − 1)

so that Cp varies only with the local surface pressure p. Normal shockwaves usually present themselves as discontinuous data, particularly in stagnation pressure, where there is a large drop. To detect the rough position of the shockwave on the aerofoil surface it is useful to look at the pressures detected by the different tappings and scrutinize the -Cp vs x/c graph to see where the drop in pressure occurs. Investigating the graphs for the supercritical Mach numbers yields these approximate positions:

M        | x/c, %
0.835661 | 40-60
0.831199 | 35-55
0.793676 | 25-45

Figure 10 - Table showing approximate position of shockwave
According to the theory described earlier, these results are correct, as they demonstrate the shockwave moving further along the aerofoil as the Mach number increases. As seen in figure 9, given a sufficiently high Mach number, a shock may also occur on the lower surface of the wing. This can be seen for M = 0.835661, in figure 2, where there is a marked difference in pressure between tappings 3 and 3a. The theoretical curves on each -Cp vs x/c graph were produced using the Prandtl-Glauert law. As mentioned earlier, this law is based on thin aerofoil theory, meaning it is not exact and there are sometimes large errors between the proposed theoretical values and the experimental values achieved. These large errors are seen most clearly at the higher Mach numbers. This is because in the transonic range, where there is a mixture of sub- and supersonic flow, local shockwaves occur and the theoretical curves do not take shockwaves into account. Hence, the theory breaks down when the freestream Mach number exceeds the critical Mach number for the aerofoil. At lower Mach numbers, the theoretical values line up reasonably well with those achieved through experiment. There only seems to be some error between the two, mainly arising in the 15-25% chord range. However, overall the Prandtl-Glauert law seems to be reasonably accurate as long as the Mach number remains sub-critical. The experiment itself was successful. The rough position of the shockwave and the critical Mach number were able to be identified.
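The two post-processing steps used in the Analysis, locating the shock from the jump in surface pressure between adjacent tappings and reading the critical Mach number off the crossing of the Cp* and Cpmin curves, can be sketched in a few lines of Python. The incompressible minimum pressure coefficient and the sample -Cp distribution below are invented illustrative inputs (they are not the measured data from this report), so the printed numbers only show the shape of the method.

```python
import math

GAMMA = 1.4

def cp_critical(mach):
    # Critical pressure coefficient, equation (6), for gamma = 7/5.
    return (1.0 / (0.7 * mach ** 2)) * (((5.0 + mach ** 2) / 6.0) ** 3.5 - 1.0)

def cp_min_scaled(cp_min_incompressible, mach):
    # Minimum (most negative) Cp on the aerofoil, scaled with equation (5).
    return cp_min_incompressible / math.sqrt(1.0 - mach ** 2)

def critical_mach(cp_min_incompressible, lo=0.3, hi=0.95, tol=1e-5):
    """Bisect for the Mach number at which the scaled Cpmin curve crosses Cp*,
    i.e. where the most negative surface pressure first reaches sonic conditions."""
    f = lambda m: cp_min_scaled(cp_min_incompressible, m) - cp_critical(m)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def shock_location(x_over_c, cp):
    """Approximate shock position: the interval between adjacent tappings with the
    largest rise in Cp (i.e. the largest fall in -Cp, a pressure recovery)."""
    jumps = [cp[i + 1] - cp[i] for i in range(len(cp) - 1)]
    i = max(range(len(jumps)), key=lambda k: jumps[k])
    return x_over_c[i], x_over_c[i + 1]

if __name__ == "__main__":
    # Assumed low-speed minimum Cp near zero incidence (illustrative only).
    print(f"Estimated critical Mach number: {critical_mach(-0.43):.3f}")

    # Invented Cp distribution with a pressure jump between 40% and 60% chord.
    x = [0.065, 0.15, 0.25, 0.35, 0.40, 0.60, 0.75]
    cp = [-0.55, -0.80, -0.95, -1.05, -1.10, -0.35, -0.25]
    lo_x, hi_x = shock_location(x, cp)
    print(f"Shock lies roughly between x/c = {lo_x:.2f} and {hi_x:.2f}")
```

With the measured tapping distributions substituted in, the same crossing construction is what figure 8 performs graphically, and the pressure-jump search mirrors how the positions in figure 10 were read off the -Cp vs x/c plots.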
There are, however, some sources of inaccuracy or error that could be addressed if the experiment is to be repeated for 'better' results. Aside from the normal human errors made during experimentation, the apparatus itself could be improved. Pressure tapping 1 (the closest to the leading edge) and pressure tapping 8 (the closest to the trailing edge) were placed at 6.5% and 75% chord respectively. What this means is that they are not located at the leading and trailing edges, effectively meaning it is not possible to determine whether or not the pressure is conserved. At a zero angle of incidence, the pressure at the tip of the leading edge should be equal to the pressure at the tip of the trailing edge. To improve this, pressure tappings should exist at the leading and trailing edges, and possibly more pressure tappings across the aerofoil surface to provide more points for recording. Another source of improvement could be using a larger test section so that there is absolutely no disturbance in measuring the static pressure. However, this may only produce a minute difference in the data and may not be worthwhile for such little gain.

Conclusion
As desired, a symmetric aerofoil was tested in transonic flow and the experimental results were compared to the theoretical values predicted by the Prandtl-Glauert law. In the cases where there was a large disparity between experimental and theoretical results, an explanation was given, relying on the theory behind transonic flow.

Bibliography
H. H. Hurt, J. (1965). Aerodynamics for Naval Aviators. Naval Air Systems Command.
Mason. (2006). Transonic aerodynamics of airfoils and wings. Virginia Tech.
Motallebi. (2012). Surface Pressure Measurements on an Aerofoil in Transonic Flow. London: Queen Mary University of London.

Thursday, December 5, 2019

Alzheimer's Disease

Question: Discuss Alzheimer's disease.

Answer:

Introduction
Alzheimer's disease is an irreversible and progressive disorder causing memory loss and disruption of other cognitive functions, which severely interferes with daily life activities. It is the most common type of dementia and accounts for up to 75% of all dementia cases worldwide. It has been estimated that about 25 million people worldwide are affected by dementia of some kind (Qiu, Kivipelto & von Strauss, 2009). The incidence of Alzheimer's disease is strongly, although not solely, associated with ageing. The majority of affected individuals are aged above 65 years; however, early onset of Alzheimer's has also been observed. There are more than 413,106 Australians suffering from dementia, among which 55% are female and 45% male (Alzheimer's Australia | Statistics, 2017). With the world population ageing at a steady rate, the frequency of dementia is expected to double by 2030 and hence is presently considered a public health priority. Further, the global societal cost of Alzheimer's disease is quite high, both in terms of direct burdens (medical and social care) and indirect burdens (unpaid caregiving by family and friends). This clinical update aims to address several aspects of the disease, including prevailing diagnostic methods, the distinction between different types of dementia, pathophysiology, prognosis and potential treatment options.

Aetiology and Pathogenesis
Scientists believe that Alzheimer's is a multifactorial disease resulting from the combination of a range of different factors, of which increasing age is the most potent risk factor of all. The strong association of the disease with old age is an indication of the complex interaction of other risk factors such as genetic susceptibility, psychosocial factors, lifestyle and environmental factors experienced over the lifespan of the patient. Alzheimer's is caused by brain cell death, like all other types of dementia. Thus it is a progressive neurodegenerative disorder resulting in shrinkage of the overall brain size and a decrease in nerve cells and the connections between them. Damage and changes to the brain start occurring from as long as a decade or more before the appearance of any clinical symptoms. Abnormal deposits of a protein called beta-amyloid cause amyloid plaques. Abnormal forms of another protein, tau, accumulate into tangles which, as the name suggests, become entangled within the neurons of the brain. These beta-amyloid plaques and tau tangles affect the normal functioning of the neurons, and the neurons start losing neural connections (Swerdlow, Burns & Khan, 2014). In the initial stages only the hippocampus, which is associated with memory functioning, is affected. In the later stages, as the disease progresses, other parts of the brain are affected and brain tissue shrinks considerably (Fjell et al., 2014). However, the exact reason why these plaques and tangles form, and why the onset of the disease occurs mostly at an old age, is still undiscovered. Research provides several theories and age-related changes, such as mitochondrial dysfunction, inflammation and production of increased levels of free radicals, which require further investigation to treat and cure the disease. The disease progresses through three main stages, namely preclinical, mild cognitive impairment and finally dementia. During the preclinical stage no cognitive or memory impairment is observed; however, changes in the brain tissue proceed.
In mild cognitive impairment some signs of disruption in cognitive functioning may appear, but they do not interfere with the normal daily activities of the patient (Vos et al., 2015). As the disease further progresses to severe cognitive impairment, dementia and memory loss are observed. Several lines of risk factors are associated with Alzheimer's. Table 1 summarizes some of the established risk and protective factors associated with the disease.

Aetiological hypothesis | Risk factors / protective factors | Epidemiological evidence
Genetic | Risk factors: APOE ε4 allele (late-onset Alzheimer's); inherited genetic changes (early-onset Alzheimer's) | Strong
Vascular | Risk factors: hypertension, high BMI, diabetes, cardiovascular disorders, cerebrovascular disorders and smoking. Protective factors: light to moderate alcohol consumption, antihypertensive therapy | Moderate or sufficient
Psychosocial | Protective factors: high level of education, persistent cognitive and mentally stimulating activities, increased social and physical activity | Moderate or sufficient
Nutritional and dietary | Risk factors: folate, vitamin B12 and antioxidant deficiency. Protective factors: omega-3 fatty acids and vegetable consumption | Insufficient or limited
Other (toxic or inflammatory factors, etc.) | Risk factors: head injuries, exposure to toxins and electromagnetic fields, depression and hormone replacement therapy. Protective factors: non-steroidal anti-inflammatory drugs | Insufficient or limited

Table 1: Risk and Protective Factors of Alzheimer's Disease (Qiu, Kivipelto & von Strauss, 2009)

The public health impact of Alzheimer's is profound. As the disease is costly in terms of both personal suffering and economic loss, it has become an important facet of public health and health care delivery. Although the immediate clinical symptoms of the disease are limited to memory loss and other cognitive impairments, several non-cognitive secondary clinical features also appear, such as behavioural disturbances, depression and disruption of daily life activities (Wimo et al., 2013). Several studies have estimated the financial burden of the disease by using self-report and observational tools. One study estimates the cost of the disease to be $38,000 per patient per year, although estimates ranging from 50% lower to 50% higher have also been reported (Sloane et al., 2002). The main burden of care is upon informal caregivers. Time spent providing care ranges from 5.9 hours per week for patients with lower severity to 35.2 hours per week for patients with severe cognitive impairments and limitations (Wittenauer, Smith & Aden, 2013). Hospitalization incurs the highest financial burden for patients with moderate to severe forms of the disease.

Clinical Manifestations
Several signs and symptoms are associated with Alzheimer's disease. Affected individuals may experience one or more of these symptoms to be diagnosed with the disease. Appropriate evaluation of the symptoms is essential for early diagnosis by medical practitioners. The symptoms often vary according to the severity of the disease, i.e. mild, moderate or severe. Almost all the symptoms are related to memory and cognition. Patients suffer from a worsened ability to remember and process new information like conversations, appointments and navigation routes, and from impairments in reasoning, judgement and complex tasks, such as an inability to make appropriate decisions, manage finances or plan complex sequential activities.
Vision is often affected in patients with Alzheimer's disease, causing moderate to severe visuospatial impairments. Difficulty in reading, judging distances, determining colour, recognising familiar faces and objects and carrying out tasks that involve some sort of orientation are early symptoms of disease progression (Ismail et al., 2016). Further, behavioural changes are also extensively observed in Alzheimer's patients: mood swings, lack of interest and motivation, apathy, social withdrawal, and compulsive and obsessive behaviour. Memory loss is the most common of all the symptoms and is associated with the manifestation of all the other related symptoms. People are often diagnosed at the mild stage of the disease, which is most prominently characterised by mild cognitive impairment (Geda et al., 2013). At the initial stages it does not interfere with daily living activities, but older people with the condition have a higher risk of developing Alzheimer's. As the disease progresses from mild to severe, the brain ceases to work and the body shuts down.

Diagnostic Processes
Various guidelines for Alzheimer's dementia and mild cognitive impairment can be used in general practice. To diagnose the disease, the initial step is a medical assessment of the patient. Early diagnosis is crucial for providing appropriate treatment and intervention and for restricting the progression of the disease, as a direct cure is yet to be discovered. A medical assessment should include examination of the patient's family and medical history. Whether dementia runs in the family, or any incident relating to head injury, can be a high risk factor and might aid in early diagnosis. Physical examinations, including measurement of blood pressure and other cardiovascular parameters, must be performed to assess the effects of these on the progression of the diseased condition. Neurological tests, such as assessment of balance, sensory functions, reflexes, eye movements and other neurological functions, may help in assessing the overall functional ability of the patient and in diagnosing the disease. In the preclinical stage several biological and physiological changes are underway, but no noticeable clinical symptoms are visible in the patient. Studies predict that the onset of this preclinical stage may begin years, even decades, before any manifestation of the disease symptoms, and hence diagnosis of this stage becomes somewhat difficult for medical practitioners and physicians. The diagnosis of this stage mostly depends on the identification of certain biomarkers that may signal the inception of these biological changes within the brain (Olsson et al., 2016). The most efficient biomarkers of Alzheimer's disease are brain imaging studies using biophysical techniques like magnetic resonance imaging (MRI) and positron emission tomography (PET), and estimation of several proteins present in the brain and cerebrospinal fluid. To assess mild and severe symptoms, established guidelines must be followed. Memory and cognitive skills, behavioural changes, the degree of memory or cognitive impairment and the cause of symptoms are evaluated for such a diagnosis (Hayne, Lim & Donnelly, 2014). The practitioner must rule out other factors that can cause similar symptoms by thoroughly studying the patient's history. Parkinson's disease, depression, past strokes and other medical conditions must be considered prior to diagnosing the patient with Alzheimer's disease.
Treatment
No drug has yet been formulated that can completely protect neurons from degenerative effects; pharmacological treatment primarily depends on the inhibition of acetylcholine degradation in the nerve synapses. Acetylcholinesterase inhibitors are the main drugs that have been used to treat Alzheimer's. They act by slowing down the degradation of the neurotransmitter. Another group of drugs, N-methyl-D-aspartate receptor antagonists, are also used; these regulate the activity of glutamate and help in the process of cell signalling.

References
Alzheimer's Australia | Statistics. (2017). Fightdementia.org.au. Retrieved 2 September 2017, from https://www.fightdementia.org.au/statistics
Fjell, A. M., McEvoy, L., Holland, D., Dale, A. M., Walhovd, K. B., & Alzheimer's Disease Neuroimaging Initiative. (2014). What is normal in normal aging? Effects of aging, amyloid and Alzheimer's disease on the cerebral cortex and the hippocampus. Progress in Neurobiology, 117, 20-40.
Geda, Y. E., Schneider, L. S., Gitlin, L. N., Miller, D. S., Smith, G. S., Bell, J., ... & Rosenberg, P. B. (2013). Neuropsychiatric symptoms in Alzheimer's disease: past progress and anticipation of the future. Alzheimer's & Dementia, 9(5), 602-608.
Hayne, D. J., Lim, S., & Donnelly, P. S. (2014). Metal complexes designed to bind to amyloid-β for the diagnosis and treatment of Alzheimer's disease. Chemical Society Reviews, 43(19), 6701-6715.
Ismail, Z., Smith, E. E., Geda, Y., Sultzer, D., Brodaty, H., Smith, G., ... (2016). Neuropsychiatric symptoms as early manifestations of emergent dementia: provisional diagnostic criteria for mild behavioral impairment. Alzheimer's & Dementia, 12(2), 195-202.
Olsson, B., Lautner, R., Andreasson, U., Öhrfelt, A., Portelius, E., Bjerke, M., ... & Wu, E. (2016). CSF and blood biomarkers for the diagnosis of Alzheimer's disease: a systematic review and meta-analysis. The Lancet Neurology, 15(7), 673-684.
Qiu, C., Kivipelto, M., & von Strauss, E. (2009). Epidemiology of Alzheimer's disease: occurrence, determinants, and strategies toward intervention. Dialogues in Clinical Neuroscience, 11(2), 111.
Sloane, P. D., Zimmerman, S., Suchindran, C., Reed, P., Wang, L., Boustani, M., & Sudha, S. (2002). The public health impact of Alzheimer's disease, 2000-2050: potential implication of treatment advances. Annual Review of Public Health, 23(1), 213-231.
Swerdlow, R. H., Burns, J. M., & Khan, S. M. (2014). The Alzheimer's disease mitochondrial cascade hypothesis: progress and perspectives. Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease, 1842(8), 1219-1231.
Vos, S. J., Verhey, F., Frölich, L., Kornhuber, J., Wiltfang, J., Maier, W., ... & Frisoni, G. B. (2015). Prevalence and prognosis of Alzheimer's disease at the mild cognitive impairment stage. Brain, 138(5), 1327-1338.
Wimo, A., Jönsson, L., Bond, J., Prince, M., Winblad, B., & Alzheimer's Disease International. (2013). The worldwide economic impact of dementia 2010. Alzheimer's & Dementia, 9(1), 1-11.
Wittenauer, R., Smith, L., & Aden, K. (2013). Update on 2004 Background Paper written by Saloni Tanna. Background Paper, 6.

Thursday, November 28, 2019

US Social Security vs. Canadian Social Security

Introduction
The increasing challenge posed by an increase in the elderly population creates the need to deploy social security services. Demographic pressures on the provision of social services are estimated to increase drastically in the future. In the United States, the notable demographic trends that are likely to impose significant pressures on social security include the anticipated retirement of the baby boomer cohort, a reduction in fertility rates and increases in life expectancy, which together are estimated to produce a large increase in the old-age dependency ratio (Feldstein & Liebman, 2002). The main purpose of this paper is to compare and contrast the United States with the Canadian social security system. The paper provides an overview of the United States and Canadian social security systems, after which it discusses the objective similarities and differences between the two systems. In addition, the paper also provides a subjective analysis that is based on the current evaluation of the United States social security system against the Canadian system. Based on the research, the paper provides recommendations for improving the United States social security system.
Introduction to the United States Social Security system
In the US, social security mainly involves the Old-Age, Survivors and Disability Insurance (OASDI) scheme that is administered by the federal government. Social Security in the United States was first adopted during 1935; subsequent amendments have resulted in the inclusion of social welfare and social insurance. Major components of the United States social security system also include Supplemental Security Income, various unemployment benefits, aid to needy families, grants issued to the states by the federal government for the purposes of Medical Assistance Programs (Medicaid), Health Insurance for the Aged and Disabled (Medicare) and the Patient Protection and Affordable Care Act (Giles, 2005). Social security in the United States is mainly financed using dedicated payroll taxes that are referred to as the Federal Insurance Contributions Act tax. Social security in the United States is largely concerned with the benefits associated with retirement, unemployment, cases of disability, death and survivorship (Hyman, 2010). Social Security in the United States is considered the largest government program in the globe and takes a significant portion of the federal budget. In addition, social security is the biggest social insurance program in the United States. It is estimated that social security in the United States has helped to keep 40 percent of people aged over 65 years out of poverty.
Introduction to Canadian social security
Canadian social security comprises approximately 2.3 percent of Gross Domestic Product; the pay-as-you-go component of Canadian social security is relatively small compared to that of the United States. The Old Age Security (OAS) program is one of the core elements of elderly income transfers in Canada. The Guaranteed Income Supplement is used to increase income levels for aged individuals in Canada.
Another important element of Canadian social security is the Canada Pension Plan and the Quebec Pension Plan, which are mainly funded by joint monetary contributions from employers and employees. Canadians contribute a 4.95 percent tax on their income from USD 3,500 to USD 41,000 (Orszag & Diamond, 2005). Social security in Canada mainly involves the government programs that are adopted with the main objective of offering assistance to its citizens, and covers diverse programs that are mostly run by the provinces. In Canada, the social safety net is mainly concerned with transfer payments that are directed at low-income citizens only. It does not incorporate expenditures associated with healthcare services and education (Weisbrot & Baker, 2001).
Similarities between the United States and Canadian social security systems
In the US, social security denotes the funds that individuals pay during their working life, which mainly comprise the retirement benefits received during old age. This is a similar approach to that taken under Canadian social security, implemented through the Canada Pension Plan. In the United States, employees contribute 5.65 percent of their earnings towards their social security and Medicare, which is used for offering medical insurance for aged and retired people. The social security premiums in the US are capped at earnings of USD 106,800, while there is no capping of the premiums for Medicare (Hyman, 2010). Canadians contribute 4.95 percent of their total earnings towards the CPP. The socialized healthcare plan in Canada is somewhat similar to the Medicare program in the context of the United States (Orszag & Diamond, 2005). Another similarity between the United States and Canadian social security systems is that they both make use of a pay-as-you-go scheme, although the United States component is relatively large compared to the Canadian one. Both social security systems can be considered a hybrid between a PAYGO plan and a fully financed program (Hyman, 2010).
Differences between the United States and Canadian social security systems
A notable difference between the two systems is the scope of coverage of social security. In this context, the Canadian social security system does not have provisions for education and healthcare expenditure, which are provided for in the social security system of the United States (Giles, 2005). The second difference between the two systems is that United States expenditures on social security are relatively higher compared to Canadian expenditure on social security. For instance, Old Age and Survivors benefits comprise 6 percent of United States GDP, compared to Canada, which allocates 4.2 percent of its GDP. In addition, Canada spends roughly twice as much as the United States spends on unemployment benefits (Hyman, 2010). Another difference between the two systems is that the CPP is a reserve fund that is invested in the market; this is contrary to the social security funds that are invested in government securities and bonds. Investing the CPP in the market has resulted in a 5 percent marginal difference between the returns in the United States and Canada.
Evaluation of the current US system against the Canadian system
It is arguably evident that Canadian social security has a better establishment compared to the United States social security system.
There is a potential that the Canada Pension Plan fund will grow, since it is invested in the market, making significant contributions towards its future sustainability compared to the United States social security funds that are invested in government bonds. Another reason that contributes to the effectiveness of the Canada Pension Plan when compared with the US social security system is that the benefits of the CPP are relatively lower compared to the benefits of United States social security. The generosity of United States social security puts into question its sustainability in meeting the future demands posed by the aging population (Weisbrot & Baker, 2001).
Recommendations to improve the United States social security system
Improving the efficiency of United States social security requires the reinforcement of insurance and financing. With regard to insurance, it is important to maintain an appropriate balance in terms of social and individual responsibility. With regard to financing, establishing a suitable balance between pre-retirement funding and the use of the common PAYGO method will serve to address the potential challenges imposed by the demographic trends in the United States.
References
Feldstein, M., & Liebman, J. (2002). The Distributional Aspects of Social Security and Social Security Reform. Chicago: University of Chicago Press.
Giles, C. (2005). US social security is among least generous. Web.
Hyman, D. (2010). Public Finance: A Contemporary Application of Theory to Policy. New York: Cengage Learning.
Orszag, P., & Diamond, P. (2005). Saving Social Security: A Balanced Approach. Washington DC: Brookings Institution Press.
Weisbrot, M., & Baker, D. (2001). Social Security: The Phony Crisis. Chicago: University of Chicago Press.

Monday, November 25, 2019

Under the Autism Umbrella

Under the Autism Umbrella To many people, the very name "autism" conjures the image of a child, in isolation, banging his head against the wall. The word "autism" derives from the Greek word "autos," meaning self. The name arose because of the person's trouble in communicating with others. Since it was first identified in 1943, autism has come to be seen as a spectrum disorder, that is, a disease that can range from mild to severe. As recently as a decade ago, the incidence of autism was thought to be only one in 5,000. Recent studies now suggest that the number is much higher: about one or two in 2,000 people, says Fred Volkmar, MD, professor of child psychiatry at Yale University. It is impossible to tell whether the numbers are higher because there's an epidemic of autism or because it is simply being detected more. Autism is a profound developmental disability that compromises a person's ability to relate to other people. Language lags are a hallmark of the disorder, which is usually detected between the ages of 1 and 3. Autistic children often are attached to routines, fixate on specific subjects or toys, and overreact to stimuli. Sometimes they engage in repetitive movements such as head banging or arm flapping. It is four times more common in boys than girls. But that doesn't mean autistic children are stupid, or doomed to a hopeless existence. Some, in fact, are quite intelligent. Medical experts say that it is a spectrum disorder because IQs of autistic children can range from mental retardation to genius level. On the high end are children with Asperger syndrome, a condition in which verbal skills are generally quite good, but qualities known as language pragmatics, such as tone of voice and facial expressions, are compromised. Children with Asperger syndrome usually have poor social skills due to their inability to read and transmit nonverbal cues accurately. They also have problems understanding how other people feel. Al...

Thursday, November 21, 2019

Analyzing and Writing Cases

In general, the selection of strategy consists of a number of approaches. In an organizational context, the choice of approach relies on several factors such as the current situation of the company, the resources of the company, competencies and policies of the organization, risk tolerance, internal clashes, extent of external reliance and expected competitive response, among others. With due consideration to these factors, the alternative strategies that have been recommended for Edward Jones comprise a combination of expansion as well as product development strategy. The present-day business environment is altering continuously due to the inclusion of new entrants, the development of substitutes, the enhanced performance of competitors and so on. As a result, Edward Jones might need to discover ways to mitigate threats from rivals and ascertain that it benefits from its persistent customer loyalty. Some of the organizational strategies to consolidate the business could be enhancing product or service quality and building better associations with customers so as to understand and meet their needs efficiently, resulting in augmented customer loyalty. External Assessment The financial services industry functions on the principle of the trade-off between risk and return. The retail brokerage industry has made it possible for individual investors to invest in various financial securities, for instance stocks and bonds. This industry has evolved over the years, and many factors such as the increase in the worth of the stock market, the advent of technology and the internet, and the increasing need of individuals to save for their future as a result of the rise in life expectancy and rising expenses have contributed towards its rapid growth. Due to the rising competition in the market, the diversification of investor needs and the increasing expectations of customers, the retail brokerage industry primarily focuses on customer service. There exists a high level of convergence in the retail brokerage industry, which permits the companies to bundle their products as well as offer discounts. The rationale behind the convergence is that it is more cost-effective to cross-sell products. Moreover, the consolidation of products results in shared information technology and elevated switching costs, and acts as a major opportunity for the industry participants. The retail brokerage industry is highly correlated with the stock exchange market. The increase in the value of stock markets across the world has provided the industry with immense growth opportunity and would continue to do so. With the rise in the number of individual investors looking forward to making investments in various types of financial assets, the industry has huge growth potential. Though the industry has immense growth potential, the risky business practices that are part of financial services organizations could bring about a stock market slump and adversely affect the retail brokerage industry. This is a major threat that the retail brokerage industry has to encounter. Instances of focus on short-term benefits are more common in the financial sector, as the managers of such organizations want to please their stockholders by providing them superior immediate returns. However, in doing so, more often than not the managers ignore the long-term sustainable

Wednesday, November 20, 2019

Managerial Decision Making

Decision making is the procedure of identifying opportunities, problems, and solutions to those problems or opportunities. Making decisions involves effort, before and after the actual choice. Decision making occurs at all levels of a business. Frequently, the board of directors makes strategic decisions regarding investment and the course of future growth of a company, among others. Managers can make the tactical decisions regarding how their own department can contribute most effectively to the overall company objectives. Ordinary employees are also expected to make decisions regarding the conduct of their own responsibilities, responses to customers and enhancements to business practice. There are two types of decision making: programmed and non-programmed decision making (Richard & Dorothy, 2010). Programmed Decision Making According to Andrew (2011), programmed decisions are made for routine, recurring and well-structured situations using predetermined decision rules. These rules normally apply prior experience or practical knowledge about what works in a certain situation. Programmed decisions are resolutions that have been made numerous times in the past; managers have developed guidelines or rules to be applied when certain situations are anticipated to occur. According to Richard & Dorothy (2010), programmed decisions are made to ensure the smooth running of organizational activities. For example, McDonald's Corporation's inventory manager will decide to order certain goods when the company is running out of stock. A few programmed decisions are structured to eliminate individual judgment. In programmed decision making there are no errors in the decisions since it is routine, and managers normally have the information needed to create guidelines and rules to be followed by others. Lower-level managers are essentially confronted by repetitive and familiar problems; therefore, they typically rely on programmed decisions, such as standard operating procedures. In most cases, lower-level managers deal with well-structured problems. If lower-level managers come across ill-structured problems, they pass on these problems to senior managers in the organizational hierarchy. Similarly, senior managers pass down well-structured problems to their subordinates so that they can handle more problematic issues. In programmed decision making there is little threat and ambiguity involved; the decision maker is certain of the consequences of his or her actions pertaining to a certain issue (Andrew, 2011). Non-programmed Decision Making Non-programmed decisio

Monday, November 18, 2019

Highlight the impact of containerisation on an international supply chain

This staggering figure constitutes 15 percent of the global vehicle market. The company generated net income of $2.8 billion on over $193 billion in revenues (Alden et al, 2006). General Motors procurement strategy General Motors' (GM's) business operations are based on sound procurement practice and basic business integrity. Officials responsible for procurement and the supply chain make their procurement decisions solely on the basis of the credibility of the suppliers that offer GM the best value for the goods and services that they require. They primarily avoid any actions that indicate that their purchasing decisions rest on improper or irrelevant considerations, whether illegal, such as a bribe or kickback, or technically legal, such as favours, free entertainment, personal friendship or gifts. The global purchasing and supply chain function of GM holds the responsibility of procuring all goods and services that are required by the company and its joint venture and alliance partners spread over all four business regions of the world. This operation involves the procurement of parts used in the production and manufacturing of vehicles as well as products and services that are utilised for the purpose of providing support to the development and production of those vehicles. Following this strategy has not only helped boost GM's production all over the world, but has also helped their suppliers to do business in unprecedented volumes, thereby providing them with an opportunity to expand their own operations across the world (GM, 2010). The new system of procurement and supply chain management in GM has been termed "Centralized Decentralization" by the vice president. The basic idea behind this system, as has been explained by the vice president, is to centralize the procurement of individual components and materials in order to leverage the company's buying power and scale (Supply chain digest, 2008). According to Ageshin (2001), General Motors has various characteristics that make it an ideal example of a company following an e-procurement strategy and a great example of how e-procurement is reshaping U.S. manufacturing. The company has the ability to increase the volume of its sales through its e-procurement system up to $300 billion-$500 billion per year. This has always been a primary strategy adopted by the company in order to generate further cost savings associated with purchasing across the whole supply chain. Because GM was very familiar with the advantages of electronic data interchange with its suppliers, and because of its dominant position in the supply chain industry, the company adopted an e-procurement system very early. GM started pursuing the idea of e-procurement as early as 1999 with the help of its technology partners i2 Technologies and Commerce One, who created a B2B trading community called TradeXchange. This e-procurement system that the company adopted led to quicker information flows and extensive information sharing across the supply chain. This has resulted in a significant improvement in the quality of planning and forecasting for the company and its suppliers, thereby boosting their businesses. The Web-based form of e-procurement has increased product customization and developed build-to-order capabilities at GM. General Motors'

Friday, November 15, 2019

Politics Essays: Fundamental Principles of Legitimate Power

There are various theories about what can make power legitimate. Do you think that one theory is more convincing than others? To understand the fundamental principles of legitimate power and governance one must look at the period surrounding the Enlightenment, because this is the time when the individual became an important entity; no longer was the individual part of a class in a hierarchical structure, with power relating to that class. The natural rights theorists' aim was to show that man was born in a state of nature and given the right to do as he/she wished, but this was sacrificed to the governance of the land, i.e. that the rational man would give up the state of freedom for the security and safety of law, governance and sovereignty. Locke said that instead of giving up the right to do absolutely anything to the sovereign entity, the rational man would put these rights in the hands of a government that holds the good of the people as supreme. Locke did not believe that man gives up all these natural rights; rather, each person retained rights that were regulated by a political government, to ensure a person would not use their rights in a way that would harm the rights of others. Locke's version of rights was one of the first models of inherent rights to life, liberty, freedom and property, where the king was there at the will of the people and benevolent in nature. The influence of Jean-Jacques Rousseau is also important, although he was not strictly speaking a natural law theorist in the sense of earlier theorists. The most important difference that Rousseau discussed in his works was that government and reason have not protected man but enslaved man, whereas in the state of nature these rights were upheld in a paradisiacal state. One of Rousseau's most interesting critiques of government and law was in the Social Contract, where man was originally free but in society is everywhere in chains. Therefore he believed that instead of giving up one's freedom to a governing body, it needs to be reclaimed by man, but this did not mean reclaiming the paradise of Rousseau's state of nature. Instead these rights should be inherent to each man, and the government created is not only for the good of the people but should be determined by the will of the people. Rousseau believed people should be part of the regulation of the government and law; otherwise the government, which is essentially corrupt, will take away these rights. Popular involvement makes it impossible for these rights to be taken away by the government. There was an assumption of equality between men and basic rights to life, liberty, freedom, and protection from the corruption of absolute government (i.e. rights to freedom of speech and assembly) and the right to a fair trial and an independent court of law. This argument stems from the authors of the American Constitution, where the rights embodied in the text were self-evident because all men were created equal and given certain inalienable rights, which are afforded to all persons of the globe; state borders have no impact on these rights. The writers claimed these rights came from God. Other theorists have argued we have these rights merely because we are human. This argument is still one used in the 20th/21st century as it is the easiest to pass off; however, there is no real moral justification for upholding these rights, therefore how can one say we must keep these rights in the face of a breach or dissolution of them.
Hobbes' state of nature sets up that men are by nature equal: 'Nature hath made men so equal, in faculties of the body, and mind ... For as to the strength of body, the weakest has strength enough to kill the strongest, either by secret machination, or by confederacy with others, that are in the same danger with himself'; hence all are equal in fear of death. Therefore if this fear was set forth by the monarch, then this first law of nature legitimizes the citizens to revolt and set up a form of governance that ensures this equality and that their basic rights are upheld. Therefore if the citizens of Hobbes' state are able to get together to give the power of law and governance to a single individual they believe will uphold the common good, then in the same coalition they can depose this individual if in fact the powers of governance and over the law are misused. This state of nature is hypothetical, in order to provide a theory justifying the fair governance of a small section of society, or, as Hobbes prefers, a monarch. It is the equality of fear and the individual's right to everything, in addition to subsequent laws of nature, which provide the conditions for a social contract to ensure the security and equality of mankind. There are some problems with Hobbes' social contract, which gives the power of rule and governance to a single individual; this is arguably giving this individual unchecked power. Therefore, if every man has the right to everything, then the state of nature's equality is no longer the case, because the power of law lies in an individual's hands, and this individual has the wants and desires to obtain everything. Hence there will be a tyrannical government, rather than a government for the common good. Utilitarianism is not a theory of individual rights; instead it views the good of the community as a more important aim for the law and government ruled by the people. Theorists such as Edmund Burke believed that rights were natural, including life, liberty and freedom, but this theory was in the abstract; therefore they should be given by society for the good of its people, because these rights cannot be universal, otherwise there is no place for cultural diversity. Burke is one of the first theorists with the cultural relativism argument; the critics of universal justice have further advanced this in the 20th and 21st centuries. Burke's move to reject universalism was the first chip in these inherent rights that ensured legitimate power: how can rights be inherent if they are not available for everyone, because a culture denies them? Jeremy Bentham advanced this. His theory held that there were no natural rights: the government, for the good of society (a form of utilitarianism), afforded rights. Therefore Bentham's rights were legal rights, where one can do whatever one wants as long as the law does not prohibit it, i.e., rights do not stem from the individual but from the state and the powers of governance (positivism). The problem with positivism, or this early form of rights from utility, is that the law/governance is the basis of rights and there is no greater principle of just and legitimate governance. The model of Marxism states that it does not regard the individual as having any human rights; instead it is for the state to set the needs of the individuals, i.e., it is not the good of the individual that the state upholds but the good and the needs of the state. Marx considered law, justice, freedom and democracy as ideas and concepts that are determined by historical and sociological circumstances and hence irrelevant.
Instead, a person's essence was the potential to use one's abilities to the fullest and to satisfy one's needs, therefore promoting fundamental rights as rights of well-being and satisfaction of the individual. These rights would involve social and economic rights, which, on this view, is the only way to ensure legitimate power and justice. Marx's vision turned out to be idealistic and failed in reality.

The most legitimate version of power and governance seems to be a mixture of traditional utilitarianism with a method of human rights. Modern utilitarian theorists have extended Bentham's theory but put it in more modern terms: instead of maximising the pleasures and desires of the individual, the government would maximise the general welfare of individuals, thereby minimising the frustration of wants and preferences. What one can see, therefore, is that the governing bodies must put the general welfare first yet minimise the individual's needs, causing a conflict of rights between what is done in the name of society and what the individual wants. The problem with this theory is that it is socially constructed; there is no autonomy of being and no argument for universal rights that transcend all cultures and religions. It therefore falls short of what is needed for an all-encompassing human rights theory, as the "general welfare" can differ between cultures.

Rawls, in his thesis for engendering human rights, states that justice is the prime basis of all government, and that human rights are the obvious means and end to ensure that justice is fulfilled. Rawls' theory is based on a few key ideas: the rights and duties of the government and institutions of society, and the burdens and benefits of citizens co-operating. Rawls bases his theory on each individual having an inherent and inviolable being set in justice; this being cannot be overridden for the welfare of society. This theory does not fall foul of the arguments against modern utilitarianism. Rawls does use the social contract fiction of Hobbes and Locke; however, the basis for moving out of ignorance (the state of nature) is reason, and this reason establishes the principles of justice upon which his social contract is based. These principles are: 1) that each person has basic rights and liberties in accordance with freedom; and 2) that there is distributive justice, where inequalities are restrained by the greatest benefit of the least advantaged and each person has the condition of fair equality of opportunity. These principles cannot be derogated for the public good, and liberty is the supreme principle. Rawls' theory is very important when looking at human rights theories because it begins to tackle the universality of human rights based on justice, as well as the inequalities apparent in society. The theory has flaws, but it is one of the more comprehensive theories setting up basic rights and freedoms and ensuring legitimate power, because it protects the individual's democratic rights and offers a more complex analysis of the nation-state. As Andrews and Saward argue: "The modern Western approach to political legitimacy links it with the opportunities for democratic participation, so that democracy is now seen as a necessary condition of political legitimacy... In theories of political legitimacy a stereotype of a domestic state with its own domestic population can easily emerge. Yet the actual histories of states are much more complicated than that."
Bibliography:

Andrews & Saward, Living Political Ideas (Edinburgh University Press, 2005)

Edmund Burke, Reflections on the Revolution in France, ed. J.G.A. Pocock (Hackett, Indianapolis, 1987)

Thomas Hobbes, Leviathan, "Of the First and Second Natural Laws, and of Contracts", excerpts in Joseph Losco & Leonard Williams (eds.), Political Theory: Classical Writings, Contemporary Views (St. Martin's Press, New York, 1992)

Peter Jones, Rights: Issues in Political Theory (Palgrave, Basingstoke, 1994)

John Locke, The Second Treatise of Government, excerpts in Joseph Losco & Leonard Williams (eds.), Political Theory: Classical Writings, Contemporary Views (St. Martin's Press, New York, 1992)

Joseph Losco & Leonard Williams (eds.), Political Theory: Classical Writings, Contemporary Views (St. Martin's Press, New York, 1992)

Marx & Engels, The Communist Manifesto (Progress Publishers, Moscow, 1952 edition)

Jean-Jacques Rousseau, The Social Contract and Discourse on the Origins and Foundations of Inequality Among Men, excerpts in Joseph Losco & Leonard Williams (eds.), Political Theory: Classical Writings, Contemporary Views (St. Martin's Press, New York, 1992)

Shestack, "The Philosophical Foundations of Human Rights", in Janusz Symonides (ed.), Human Rights: Concepts and Standards (UNESCO Publishing, Aldershot, 2000)

John Rawls, A Theory of Justice (Oxford University Press, Oxford, 1971)

Wednesday, November 13, 2019

Essay --

Introduction

The hedge fund industry is surrounded by much debate and controversy. A lack of excess returns, unclear market impact, and limited oversight are all subjects of concern for the public and for market participants. The hedge fund industry worries small investors and financial professionals who do not know how to accurately assess the risks associated with hedge funds. Hedge funds largely operate like mutual funds, but their managers are not regulated in the same way. Light oversight and regulation allow them to withhold public information on their profits and losses and on their investment strategies. Hedge funds also rely heavily on volume to achieve profits. Systemic risk has been a major part of the debate since LTCM in 1998.

Story of LTCM

Background

Long Term Capital Management (LTCM) was a hedge fund established in 1994 by John Meriwether, who had been a successful bond trader at Salomon Brothers. Meriwether was one of the first on Wall Street to hire professors and academics who applied models based on financial theories to trading. This team demonstrated an ability to precisely calculate risk and generated amazing returns (Goldberg, M., 2012). The partners of LTCM included a professor from Harvard University, Nobel Prize-winning economists, a former vice chairman of the Board of Governors of the Federal Reserve, and other successful bond traders. This group of traders and academics attracted about $1.3 billion from different institutional clients (Goldberg, M., 2012). Investors were not allowed to take any money out for three years and had to put in $10 million to get into the fund. The annual return in 1995 was 42.8% after management took 27% off the top in fees (a rough sketch of this fee arithmetic appears at the end of this essay). In 1997 LTCM successfully hedged most of the risk from the Asian currency crisis by ... (Goldberg, 2012).

Should the Fed have intervened?

In order to save the U.S. banking system, the President of the Federal Reserve Bank of New York, William McDonough, convinced 15 banks to bail out LTCM with $3.5 billion in return for 90% ownership of the fund. The Fed also started lowering the Fed funds rate as an assurance to investors that it would do whatever it took to support the U.S. economy. Without that direct intervention, the entire financial system was threatened with collapse (Amadeo, 2012). However, the deal the Fed brokered was a better one for LTCM's managers and shareholders than they would otherwise have received. This set the precedent for the Federal Reserve's bailout role with AIG, Bear Stearns, Fannie Mae and Freddie Mac during the financial crisis. Once financial companies realized that the Fed would bail them out, they were more willing to take risks (Amadeo, 2012).
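The 1995 return figure above mixes gross and net numbers. As a minimal sketch of the implied fee arithmetic, assuming the 42.8% is the return to investors after fees and the 27% is the share of gross profits taken by management (the essay does not spell out the exact fee structure, so both readings are assumptions), the gross return can be backed out as follows:

```python
# Minimal sketch of the LTCM 1995 fee arithmetic (illustrative assumptions only).
# Assumption: 42.8% is the return to investors *after* fees, and 27% is the
# share of gross profits taken by management "off the top" in fees.

net_return = 0.428   # reported 1995 return to investors, net of fees
fee_share = 0.27     # assumed share of gross profits taken in fees

# If fees take 27% of gross profits, investors keep the remaining 73%,
# so the implied gross return is net / (1 - fee_share).
gross_return = net_return / (1.0 - fee_share)

print(f"Implied gross return: {gross_return:.1%}")                      # ~58.6%
print(f"Fees as a fraction of invested capital: {gross_return * fee_share:.1%}")  # ~15.8%
```

Under these assumptions, the fund would have had to earn roughly 58-59% gross for investors to net 42.8%, which gives a sense of how rich the fee take was even in a very good year.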