Saturday, August 31, 2019
Body Language in the Workplace
The book deals with how body language affects your business career and illustrates, with step-by-step examples, the effect of body language and how to use it in favour of your success. The authors, Allan and Barbara Pease, both come from the business world and wrote this book together, developing specific techniques through personal experience. Allan Pease had his first personal experience with the effect of body language as a teenage boy, working as a door-to-door salesman for a rubber sponge company. By watching people's body language he quickly learned what they were thinking, and then found a way to persuade them to willingly buy the rubber sponges without them noticing it. He later worked as a successful salesman for an Australian life insurance company. The book is a "how to" book and is intended for prospective and current business people. The authors address the reader directly, which gives the book a more personal approach. Their style is very informal, and the quality of writing is very clear and original, which makes it easier for the reader to follow and not get bored. It suits the intended audience.

Body

The book contains seven chapters, each representing a different situation the reader is confronted with in his/her everyday business life, making it easier for the reader to identify himself/herself in the examples and adapt them more easily. Each example is given an additional illustration or specific situation so the reader is directly involved and understands better what the authors are referring to. Furthermore, there are 14 business rules spread over the book; more precisely, each chapter contains two business rules that have to be remembered. Moreover, the book is divided into two different categories: 'A Woman in Business' and 'A Man in Business'.

The first chapter is titled "Interviews: How to Get the Job...Every Time; Are You Sitting Comfortably? How to Sit, Where and Why". It discusses how a first impression is made and how to work on it. The first impression is more important than what is written on your curriculum vitae; the interviewer will remember your appearance rather than what college you attended. This chapter also tells you what to do and what not to do in an interview, for example not to wear a goatee because it represents Satan and will most likely repel people subconsciously. Furthermore, the chapter shows the reader how seating arrangements can change your whole position and what type of table is most suitable for a conference.

The second chapter is titled "How to Take Your Career in Your Hands: The Art of Handshaking, Networking and Surviving the Office Party". In this chapter the reader learns what a handshake can do to his/her professional career, in other words: "A good handshake can be the difference between a career boost and career suicide". Furthermore, the reader learns how to behave at an office party and how to boost his/her popularity.

The third chapter, "Persuasive Presentations", shows the reader how to behave during a presentation, take a close look at the audience and work with PowerPoint presentations. In this chapter the reader learns that the audience sitting on the left is more likely to be attentive and respond to jokes than the audience sitting on the right.
The fourth chapter, "Mastering Meetings and Perfecting Phone- and Video-Conferencing", tells the reader how to behave during a video conference, watching his/her body language and adapting the speed of voice and sound to the other person during phone and video conferences. The fifth chapter, "The Best-Kept Secrets of Successful Businesspeople", shows the reader how to make himself/herself "taller" in the business world, since it is proven that taller people get higher positions and therefore higher salaries. Moreover, this chapter shows the reader how to use eye contact and moderate smiling in specific situations. The sixth chapter, "Globalisation: The Perils and Pitfalls", shows the reader how to take foreign body language into consideration, especially in the business world, where people travel a lot. The authors give the most common example of Japan, where certain aspects of body language are perceived differently than in Europe, and show how misleading body language can ruin a business plan. For example, when Japanese people nod their heads while you are speaking, it does not mean that they are in complete agreement with you; they are rather telling you that they are listening and that you should keep on talking. The seventh and last chapter, "Office Politics, Power-Players, Office Romances and Other Ticking Bombs", gives the reader examples of how to avoid intimidation by superiors, how to see who is surfing the internet and who is really working at home, how to spot an "office romance" and, most importantly, how to deal with stress.

Conclusion

The book gives a clear and specific guideline on "how to" behave in different situations, but also shows the reader the consequences of the mistakes he/she might be making without being conscious of them. The book is easy to follow and would mostly be used by people preparing for an interview. In my opinion, it is very interesting to see how such little things, which are thought to go unnoticed, can have such a big impact on your professional career. Personally, I had the same experience as the author, Allan Pease, working as a "sales person" for an NGO, but instead of going door-to-door I had to approach people on the street. Opening your arms to the person and showing your palms is more likely to make people stop and listen to you than moving towards them with your arms crossed. It was very interesting for me to find myself in certain aspects of the book and to learn ways to improve my own appearance. I would definitely recommend this book to any person who is looking for a job, changing workplace and/or preparing himself/herself for a presentation.
Friday, August 30, 2019
Living In Two Worlds Essay
Living in two different cultures has its benefits and challenges. Although I was born in America, my parents emigrated to the U.S. from Mexico. They were only 18 years old when they first set foot in the U.S. The customs, traditions, and culture are almost the same as an American's, but they are by far different. When my parents came to the U.S. they continued to perform their customs and hold on to some of their beliefs as well as their traditions. This has influenced me in several ways.

Although my family and I live in the U.S., we follow the customs and traditions of my culture. I have learned two languages, Spanish and English. Spanish was my first primary language; then English came along in 3rd grade. I eat a variety of foods and celebrate different holidays that have to do with my religion and culture. For example, we celebrate Cinco de Mayo and "La Independencia de Mexico" (Mexico's Independence Day), and for our food tradition we make tamales, posole, birria, and mole. It is also a benefit because I get to celebrate with my family, have different foods, and learn new things about my culture. Family is really important in our culture; that's why we have a family reunion twice a year. In our family we always look out for each other; that's one custom my parents taught me. It's really fun when I get to interact with people that come from the same culture as mine, in other words "Mi Gente" (my people). Sharing my culture with my friends or with other people is important to me. I feel that people should have a good understanding of who I am and what my culture is like.

Even though this can be a major advantage, there are many setbacks to being part of two cultures. In my culture, being the oldest of your brothers or sisters means you have to look out for them and always set an example for them. They always come first, and then comes yourself. Since my parents came from Mexico, the only language they know is Spanish. They never really learned English. Therefore Spanish became my primary language. It brought me many disadvantages in school, because I started learning English in 3rd grade. That's why now in English class I sometimes have trouble with my writing.

In conclusion, living in two cultures is amazing. Every day I enjoy my culture; I breathe it, live it, and love it. I have learned so many things about my culture and where I come from, but I still have plenty more to learn from it.
Thursday, August 29, 2019
Strategic Management Master Essay Example | Topics and Well Written Essays - 2000 words
The political environment of Tesco includes factors relating to laws, government agencies and any other pressure groups that can influence Tesco. Fair trading is becoming increasingly important in the UK's business environment. For instance, 15,000 African slaves working in African cocoa fields are a big concern of chocolate consumers in the UK; likewise, there are fair-trade factors that influence Tesco as well. Hence it comes under the scrutiny and keen observation of the Office of Fair Trading for applying monopsony power in agricultural markets. Community organizations in the UK have put in efforts to boycott supermarkets and large confectionery manufacturers and to support small-scale retailers, in order to prevent the use of dominant power to dictate favourable terms and regulations to suppliers.

Increased availability of credit is an important economic factor in the UK business environment of Tesco, because the availability of credit itself provides financial confidence to the consumer and as a result leads to growth in premium foods and less demand for economy products. Large grocery retailers in the UK moved into non-food retailing so as to improve margins in the highly competitive market. The UK market is largely affected by negative inflation, which is driven by the 'Wal-Mart effect', that is, an everyday low price strategy.

Social
People like shopping where they can get everything under a single roof. So convenient shopping, especially the 'everything under one roof' strategy, is very important in Tesco's business environment. As large-scale retailers cut labour by providing self-service facilities, local communities are consequently haemorrhaging quantities of meaningful skilled jobs. The confectionery market is largely affected by the 'small treat' trend, which leads consumers to select small chocolates and small food products instead of large meals (Jess Halliday, 2008). Consumers always seem to be 'time poor' and hence tend towards choosing small treats and small food items that they can have on the go. These factors as well largely affect Tesco's market.

Technological
The growing use of electronic data interchange, barcode readers, credit or debit card reader equipment, laser and self-scanning and other point-of-sale tools has become a feature of recent innovation by retailers. The sophisticated technology used in the store card system is also significant in the UK market. Online shopping and other new trends in retailing, like home delivery services, after-sales services and home shopping, play a vital role in consumer satisfaction and hence are significant factors that influence Tesco's market.

Environmental
Environmental issues like pollution and greenhouse gas emissions have gained attention from the public. The public is always concerned about environmental pollution and emissions of greenhouse gases that cause ozone depletion and finally global warming. All the retailers have taken this factor to be a major concern and have taken actions so as to avoid any further consequences. There are other factors like packaging food, cleanliness,
Wednesday, August 28, 2019
Human Cloning Philosophy by Aristotle Essay Example | Topics and Well Written Essays - 500 words
In this paper, we shall highlight the ideas and philosophical concepts of Aristotle about human cloning practices. When the first cloned sheep, named Dolly, was created in Scotland, many individuals considered it against human ethics, and they believed that such practices contribute to devaluing the natural processes of childbirth. Aristotle has opposed practices of human cloning and supported his thoughts with claims about the psychological influences of human cloning (Aristotle). He explained that babies who are produced through artificial means by utilizing the cells of a single parent remain deprived of the love of both parents. When they see other children around them with both mother and father, they become victims of complexes and depression (Aristotle). Additionally, he highlighted that there is a risky situation for the mother who gives her cells for the artificial reproduction process, because the procedure of taking cells out of the mother's body is dangerous for her health as well as for the embryo, which is used to make a genetically identical copy of the mother (Aristotle). Moreover, human cloning goes against the natural system and authority of childbirth that has been given to men and women by God within the boundaries of a legal relationship. However, the advanced system of making human clones or reproducing test tube babies has degraded the rights of men and women and has interrupted the God-made procedures of baby production (Aristotle). Aristotle has also supported such claims because he has presented a philosophy that totally disagrees with the ethical nature of human cloning. His philosophy explicitly highlights that human cloning is unethical because unfair and painful means are used to give birth to the baby (Aristotle). He considered that the methods of human cloning are bad, and it is evident from the first experiment of sheep cloning, which employed more than a hundred scientists working day and night in order to make the cloning experiment successful.
Tuesday, August 27, 2019
Rhetorical analysis Essay Example | Topics and Well Written Essays - 750 words - 8
The dealer has an ample amount of knowledge and information regarding the different cars that are being sold by the company. The context in which this article was created is to inform the people of Tucson about the dealer of Porsche in the region of Tucson, and the aim of the dealer within this context is to increase their sales. The aim of informing as well as persuading consumers to purchase one of the cars being offered by the dealer has been quite effectively attained by exhibiting his/her credibility, by tapping into the customer's emotions of desire to live a luxurious life and their desire to experience freedom, and the author has even used logical reasoning such as reviews from the customers.

The authors of the website of Porsche of Tucson have quite effectively utilized the persuasion technique of ethical appeal in order to attract and persuade the customers to buy one of the cars being sold on the website. The author of the website has a tremendous amount of credibility, as the author of the website is the dealer working for the company of Porsche. The dealer is well informed about the different models of cars being sold on the website and the benefits and drawbacks associated with these cars. They obtain this information directly from the company and are therefore credible enough in the eyes of the audience for providing them with the information that is published on the website.

Other than ethos, the author has quite effectively used the persuasion technique of emotional appeals, or pathos, to persuade customers into purchasing the offerings of Porsche of Tucson. The author of the website has appealed to various emotions of the consumers in order to persuade the audience. They have tapped into the emotional feelings of satisfaction and happiness in order to attract the audience and motivate them to purchase one of the cars. For example, the author has stated that by purchasing one of the
Monday, August 26, 2019
Fuel Prices Statistics Project Example | Topics and Well Written Essays - 500 words
The mean price for regular unleaded gasoline in the US is $1.91/gallon, with a standard deviation of $0.17/gallon and a median of $1.86/gallon. Most of the state prices are distributed around the mean, except for the outlier cases: Alaska ($2.51), California ($2.25) and Hawaii ($2.45). Without these three states, the standard deviation has a much lower value, and the normal-distribution curve is less spread out. Alaska has the highest state fuel price in the US, $0.60/gallon more than the mean price. The small demand-supply market, inefficient refineries, and lack of competition have kept gas retail prices at a consistent high (Loy, "State begins inquiry into higher gas prices"). In California, a combination of unregulated refineries, low demand and difficulty in transportation has led to an increase in gas prices ("Record high gasoline prices in California but relief may be in sight"). Hawaii is the most oil-dependent state in the nation, with more than 90 percent of its energy coming from imported oil. Being a tourist destination, the state's economy is also extremely sensitive to global oil prices. Due to these factors, the cost of gas in Hawaii has also shot up in recent times (Song, "Gas Prices In Hawaii, California Hit $4").

The two fuel prices show a fairly strong linear correlation, with the diesel and premium unleaded gasoline prices varying proportionally. Most states form a single clustered group. The only anomalous points are for the states of California ($2.33, $2.43), Nevada ($2.26, $2.40) and Washington ($2.40, $2.36). In these states, the prices of premium unleaded gasoline are higher than diesel prices by an average of $0.09. The two outlier cases, though distributed about the linear regression line, have exceptionally high costs of both diesel and premium unleaded gasoline. The dependency of the two variables is still proportional (thus, these are not anomalous points), but the overall price
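A hedged illustration of the calculations behind this analysis is sketched below. The code is not from the original project: the state prices are a hypothetical subset standing in for the full dataset, and the diesel/premium pairs are likewise placeholders.

```python
import statistics

# Hypothetical regular-unleaded prices ($/gallon) for a few states; the
# original project used the full set of US state prices.
prices = {
    "Alaska": 2.51, "California": 2.25, "Hawaii": 2.45,
    "Texas": 1.78, "Ohio": 1.82, "Georgia": 1.80, "Iowa": 1.85,
}

values = list(prices.values())
print(f"mean={statistics.mean(values):.2f}  "
      f"stdev={statistics.stdev(values):.2f}  "
      f"median={statistics.median(values):.2f}")

# Recompute the spread without the three states identified as outliers in
# the write-up, to show how much they widen the distribution.
outlier_states = {"Alaska", "California", "Hawaii"}
trimmed = [p for state, p in prices.items() if state not in outlier_states]
print(f"stdev without outliers={statistics.stdev(trimmed):.2f}")

# Pearson correlation between diesel and premium unleaded prices
# (hypothetical pairs); statistics.correlation requires Python 3.10+.
diesel = [2.33, 2.26, 2.40, 1.95, 1.98, 1.92]
premium = [2.43, 2.40, 2.36, 2.05, 2.08, 2.02]
print(f"diesel vs premium r={statistics.correlation(diesel, premium):.2f}")
```

The same pattern (summary statistics, a trimmed recomputation, and a correlation coefficient) reproduces every figure the project reports, whatever the actual price table looks like.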
Sunday, August 25, 2019
Science Standards Essay Example | Topics and Well Written Essays - 500 words
This is the relationship that is also seen between social studies and English, where students can implement English into global essays and so on. However, it is important for them to be able to cross-link their studies in math and science because the two are so interwoven into each other's curriculums. One of the first methods from the math standards that I would immediately adopt is the use of technology to help students learn. Science is another technical learning area, and by incorporating as much technology as possible, teachers can allow students to work hands-on in certain areas that they may not otherwise be able to understand completely. One of these pieces of technology would be the scientific calculator, which is used quite a bit in the math curriculum. Using this tool in science class helps students take the calculator technology they already know from math class and implement it much more easily in science class. The other technological part of the math curriculum that I believe would also help students in science class would be the use of computers. The math standards have students starting on computers as early as the elementary grades, which is shown to greatly benefit student achievement.
Saturday, August 24, 2019
The 21st Century Lifestyle in G20 Countries is Bad for Your Health Article - 1
stress upon maternal health and also include the eradication of severe poverty and hunger, universal primary education (which helps in creating awareness through basic lessons regarding health), combating HIV, malaria and other ailments, ensuring a sustainable environment and building global alliances or partnerships for overall development. Emerging economies of the world like China, India, Brazil and South Africa still depend on assistance from developed nations in order to meet the health needs of their people. The major health concern for the G-20 countries in meeting the Millennium Development Goals is to combat infectious diseases like AIDS and malaria (Robertson, 2010). Leaving aside Brazil, all other G-20 countries have significantly failed in curbing the spread of AIDS. Moreover, apart from the spread of AIDS, chronic diseases like diabetes and fatal diseases like cardiac ailments and cancer are also increasingly affecting the people in the emerging economies (Garrett & Alavian, 2010).

Obesity is spreading fast as a symptom of the health crisis. It is a significant problem mostly for the developed nations, but the developing countries are also catching up. Three of the G20 nations have an obesity rate above 30 percent: the United States, with 46.5 percent of its population suffering from obesity, Argentina with 37.6 percent and Mexico with 35.5 percent. Across the world, above one billion adults are overweight, and obesity rates have risen threefold or more in regions of North America, East Europe and the Middle East, mainly due to lack of proper nutrients and reduced levels of physical activity. Seven amongst the G20 nations have obesity rates above 25 percent. These include Saudi Arabia (29.7%), "Australia (28.8%), Canada (25.6%) and the United Kingdom (25%)" (The Globalist, 2010). Obesity poses a great risk for chronic diseases like type two diabetes, cardiovascular disease, strokes, hypertension and sometimes even cancer. Countries like
Friday, August 23, 2019
Modern historical narrative Essay Example | Topics and Well Written Essays - 1250 words
This is because Giovanna was well known to be a beggar and professional poisoner. Often, circumstances have led us to take paths that have landed us in trouble, as was the case for Giovanna, who started and ended her days at the market place either trying to sell some concoction that could do this or that, or begging for food and money for herself and her children after she became a widow. Despite her bad luck in life, Giovanna had a keen interest in business. She noted that the only reason people (mostly women) went to her was for her potions and magic, despite her numerous protests of her incompetence in the practice. She learned to embrace her newly found 'skill' and even accepted the role in time. News about her spread in Palermo, of course in secret among her clients and potential clients, and soon she was a typical witch. She had all the classical characteristics of a witch, in that she was old (seventy-five years old, to be precise), a widow and a beggar. People even said that she went out at night with "the women from beyond" (donna di fura), supernatural beings whose unpredictable decisions and fickle desires, people believed, were responsible for their good or evil fortunes. Lavack says that more and more women visited Giovanna despite her persistent claims of incompetence. Moreover, the magic they sought was lethal and had to kill its victim through occult powers, thus leaving whoever understood or mediated those powers morally and legally without blame. In short, those who went to Giovanna went with the intent to murder. Even when her mixtures and spells did not attain their desired effect, the intent to kill remained. Challenged by the pressure from her customers, Giovanna dedicated herself to perfecting her skill in spells and magic potions. Giovanna soon made a casual discovery that changed her
How Successful are Organizations Related to Assisted Suicide in the US Research Paper - 1
In some European nations such as the Netherlands, euthanasia is accepted in some circumstances. The Dutch government has even discussed how physicians who agree to kill their terminally ill patients can be kept from being held responsible for their deaths. Over the past three decades, "American law in many states has given its citizens more rights over the events that take place in their own lives" (Amarasekara and Bagaric, 399). One of these rights is the right to determine when to discontinue medical procedures that will sustain their lives. The difference between euthanasia, or mercy killing, and the rejection of medical treatment has not been discussed in depth in public forums. Basically, the frequently used expression "right to die" muddies the distinction. In addition, the mass media's exposure of individual cases of euthanasia simply serves to distort the difference between public policy and a private act. There exists a distinct difference between "what a person might feel is practical in a particular case and what would really occur in the offices of physicians and other medical practitioners if euthanasia and assisted suicide became an accepted medical procedure" (Appel, 2). This topic is of great significance, as the public opinion polls, which always confront this issue by considering whether members of the public think they would seek this way out if they were struck by a painful terminal illness, usually do not confront the issue of what it would mean if killing were made an acceptable practice that could be carried out by medical practitioners without fear of being prosecuted. Assisted suicide takes place when one individual helps another to take his or her own life, either by offering the instrument to commit suicide or by other basic steps. Euthanasia involves direct procedures, like a lethal injection, administered by one individual to end another individual's life.
Thursday, August 22, 2019
Dolphins Essay Example for Free
Dolphins Essay Bottlenose dolphins can grow to be thirteen feet long and weigh up to 600 pounds (Bottlenose Dolphins). This makes bottlenose dolphins the largest of the beaked dolphins (Dolphin Research Center). Bottlenose dolphins have slick and rubbery skin with no sweat glands or hair. Their epidermis is ten to twenty times thicker than that of other mammals. It can be replaced every two hours, which is nine times faster than human skin. The peeling of their skin helps to reduce drag when they swim. The skin is dark gray on their backs, and fades to white or pink on their bellies. This coloring is called countershading. From above the dolphins blend in with the dark water below, and from underneath they blend in with the sunlight. Countershading helps dolphins hide from predators and prey (Bottlenose Dolphins). Bottlenose dolphins are piscivors, or fish-eaters. They have eighty-eight to one hundred small, sharp teeth for grasping slippery squid and fish (Parker and Burton) (Dolphin Research Center). When catching fish, dolphins usually herd a school of fish together and then dash through the school one at a time to feed. It has been observed where 200 bottlenose dolphins were in a single row, working together to find food. Dolphins can also use their tail flukes to toss a fish out of the water and then retrieve the shocked prey (Bottlenose Dolphins). If a dolphin catches a large fish, it will smack the fish on the ocean floor or the waterââ¬â¢s surface to break it into smaller portions (McClintock). After a dolphin catches its prey, it uses its tongue to swallow the fish and push the water out of its mouth (Dolphin Research Center). Dolphins can eat up to thirty pounds of fish in one day, so it is helpful that they have three stomach compartments, similar to that of a cow (McClintock) (Lockley 69). Bottlenose dolphins find fish by using echolocation. This is when a dolphin sends out a beam of short sonar pulses from its melon, or forehead. The beam reflects off of fish or other objects and echoes back to the lower jaw. The echoes are then sent to the ear bones where they are characterized. Using echolocation, dolphins are able to locate prey that is buried up to one and a half feet under the sand (Cahill 140-141). Bottlenose dolphins are excellent swimmers. They can jump up to sixteen feet in the air. Three to seven miles per hour is their normal swimming speed, but they can reach speeds of eighteen to twenty-two miles per hour. Dolphins also porpoise, which is when a dolphin swims fast enough to repetitively come out of the water and back under the water in one swift movement. This uses less effort than swimming fast at the oceanââ¬â¢s surface. When dolphins swim in deep open water, they often dive. They dive to 150 feet regularly, but they have been recorded diving up to 2,000 feet (Bottlenose Dolphins). When a dolphin needs to breathe, it comes to the surface, exhales, and then inhales. If a dolphin stays underwater for a very long time, it can exhale at over 100 miles per hour (Cahill 77). It only takes about 0. 3 seconds for dolphins to breathe (Bottlenose Dolphins). Dolphins exchange 80% of their lung air with each breath; when humans breathe, they exchange only 17% (Bottlenose Dolphins). They come to the surface to breathe every twenty-eight seconds when they are not diving, but they can hold their breath for up to twelve minutes (McClintock) (Bottlenose Dolphins). Before a dolphin can hold its breath for a long time, it has to slow its heart rate down to twelve beats per minute. 
A slow heart rate helps to conserve energy and oxygen while diving (Dolphin Research Center). In order for dolphins to be able to swim, they have to have fins. Bottlenose dolphins have three different types of fins on their bodies. The most recognizable is the dorsal fin. It is located in the center of the back and is the cause of dolphins sometimes being confused with sharks. The dorsal fin is helpful for balance but is not essential. Dolphins also have flippers on both sides of their bodies called pectoral fins that are used to steer. The bones in pectoral fins look similar to human hands because they have five digits. The two parts of a dolphin's tail are called flukes. Tail flukes are made up of tough connective tissue with no bones or muscle. The tail's spread is 20% of the total body length. The dolphin's back muscles move the flukes up and down to push the dolphin through the water. All of the fins and flippers use the process of countercurrent heat exchange to conserve body heat. This means that the arteries in the fins are surrounded by smaller veins so that some of the heat from the blood is transferred to the blood in the veins instead of being released to the environment (Bottlenose Dolphins). Dolphins need to conserve heat to stay warm in cooler waters. The lifespan of a bottlenose dolphin is twenty to thirty years. They can reproduce every three years for their entire lives starting at the age of six (Bottlenose Dolphins) (Cahill 98). The gestation period lasts twelve months. Baby dolphins, called calves, are usually born tail-first to prevent drowning, and the umbilical cord between the mother dolphin and calf snaps during birth (Cahill 98) (McClintock). "85% of all firstborn calves die" (McClintock). Newborn calves typically weigh twenty-two to forty-four pounds and are thirty-nine to fifty-three inches long (Bottlenose Dolphins). Since dolphins are mammals, calves drink milk produced in the mother's body (World Book 296). Mother dolphins have to swim constantly with their calves in their "slipstream" because newborns do not have enough blubber to easily float (Hecker). At about four months old, young start to eat fish and are entirely weaned from milk between the ages of one year and eighteen months (Lockley 169). Each dolphin develops a signature whistle at one month old. In order for calves to recognize their mothers by their whistle, mothers whistle to their calves almost constantly for several days after birth (Bottlenose Dolphins). A dolphin will stay with its mother for at least six years and some dolphins stay with their mothers for their entire lives (Bottlenose Dolphins). Bottlenose dolphins are very social animals. They travel in pods, which are groups of two to fifteen dolphins (Bottlenose Dolphins). Dolphins are very protective of each other, and they have killed sharks that were too close to their pod by repeatedly hitting them in the gills (Lockley 172). They will also try to save an injured or dead dolphin by keeping it at the surface for hours or even days (Lockley 19). Bottlenose dolphins are usually very friendly towards humans. Some wild dolphins even go into bays and interact with them (Dolphin Research Center). Dolphins also love to have fun. In captivity, they enjoy teasing each other and humans that are around their tanks (Lockley 48). In the wild, dolphins like to ride ocean waves or a boat's stern or bow wake (Bottlenose Dolphins).
They sometimes toss jellyfish and seaweed to one another and use plastic, seaweed, or other objects as "dolphin jewelry" on their fins, beaks, and necks (Cahill 93). Bottlenose dolphins truly are intriguing and individual animals. It's hard to believe that some people actually hunt them. Beloved and admired by many, they should be protected in both captivity and the wild. Bottlenose dolphins have been entertaining people for over eighty years, and hopefully they will continue to do so for many years to come.
Wednesday, August 21, 2019
The Importance Of Gunshot Residue As Evidence
The Importance Of Gunshot Residue As Evidence Gunshot residue is made of particles that form when gasses coming out of a gun hit a surface and instantly cool and condense. The presence or absence of gunshot residue can suggest whether a person fired the weapon or was the victim. There are many tests to show whether or not gunshot residue is present on a surface. The techniques and methods of testing have gotten much more scientifically advanced and more sensitive to minor details. There have also been many experiments to disprove the concerns of gunshot residue testing, such as false positives, transferability, and destruction of evidence. These facts alone disprove many of the arguments that gunshot residue is unreliable and should not be used as a source of evidence. Strengths and Importance of Gunshot Residue as Evidence in Court Cases Firearms are not a rare commodity in the United States, or the world for that matter, and so a basic understanding of what happens when the trigger of a gun is pulled is necessary. Many people know that when the trigger of a weapon is pulled the hammer strikes the back of the bullet casing, which ignites the primer, and creates pressure and heat in the barrel. This pressure buildup is what propels the projectile down the barrel and towards wherever the gun is pointing. The knowledge of what else comes out of the barrel and what happens with it that is not quite as well known. When the primer is struck, the intense heat causes the chemicals in the primer to vaporize and get mixed in with the gasses that are building up. When the projectile is pushed out of the barrel the gasses and the burning and unburned grains of gunpowder travel with the bullet. These gasses hit a surface such as the hands of the shooter, the victim, or surface that is being fired at. The gasses then condensate on the surface, leaving particles that are composed of the chemicals in the primer. This condensation of chemicals is referred to as gunshot residue, or GSR (Wolten Nesbitt, 1980). Gunshot residue has been used for many years as a source of evidence to not only suggest if a person has fired a gun or how far from a surface a gun was fired, but also if a case was a homicide or a suicide. However, there have been disputes over whether or not GSR is a reliable source of evidence. The points brought up in this argument are that gunshot residue tests can have false positives and false negatives, GSR can be transferred from person to person or surface to surface, and that test results can be different and sometimes inconsistent (Wolten Nesbitt, 1980). Over the years the methods of testing for gunshot residue have dramatically improved and become much more scientific. There are much less false positives due to the increased sensitivity of the tests. Research has been done that shows that even though GSR may transfer, investigators can still tell if a person fired a weapon, or just came in contact with it (DiMaio, 1999). There are also many other uses for gunshot residue analysis other then knowing if a person came in contact with a weapon, such as range determination (Saferstein, 2006). The purpose of this paper is to show the strengths and importance of gunshot residue analysis as substantial evidence in criminal court cases. 
Literature Review

In the detection of GSR, DiMaio (1999) states that scanning electron microscope-energy dispersive x-ray spectrometry (SEM-EDX) has a much higher sensitivity because it uses a scanning electron microscope to view questionable GSR particles at a high magnification. Torre, Mattutino, Vasino, and Robino (2004) agree with using SEM-EDX because the technique can distinguish between GSR and brake lining particles. By using an adhesive lifting method, SEM-EDX is even more effective (Nesbitt, Wessel, & Jones, 1976). Bird, Agg, Barnett, and Smith (2007) disagree with the use of SEM-EDX; they say that time-resolved x-ray fluorescence should be used. On the topic of transferability of gunshot residue, Gialamas, Rhodes, and Sugarman (1995) state that police officers are very unlikely to transfer GSR to suspects. Vinokurov, Zeichner, Glattstein, Koffman, Levin, and Rosengarten (2001) support this with an experiment on the destruction of GSR due to machine washing or brushing, concluding that GSR is not transferred or destroyed very easily. Havekost, Peters, and Koons (1990) state that the investigator also has to look at where the GSR is located on a person to tell whether the particles have been transferred or not. Firing distance determination is a common factor in investigations. Saferstein (2006) states that using the Greiss test method provides a more contrasted view of GSR on a surface. DiMaio (1999) states that Greiss test results can help determine whether a case is a homicide or a suicide. Brazeau and Wong (1997) say that GSR tests can also help determine whether a bullet wound is an entrance or an exit wound.

Discussion

Detection Methods

Gunshot residue detection tests first came to the United States in 1933 in the form of a paraffin test, which was used by covering the hands with paraffin wax and applying a color-changing reagent to the wax. Swabs were used instead of wax starting in 1959, but in the 1980s neutron activation and flameless atomic absorption spectrometry (FAAS) were the methods used most commonly. The above methods were effective for the detection of the three main elemental components in GSR, antimony, barium, and lead, but came up with many false positives and negatives (DiMaio, 1999).
This also means that gunshot residue tests and results cannot be as easily disputed in court. The theory of having less false positives has been tested on different occasions to show that using SEM-EDX makes GSR tests more reliable. Research by Torre et al. (2004) shows the results of tests involving particles and residue from the hands of people who work with automobiles. Particles from the brake linings and other moving parts of a car contain barium, lead, and antimony similar to GSR. This experiment proved that SEM-EDX successfully differentiates between gunshot residue and automobile particles using blind tests, which are tests where the person running using the SEM-EDX does not know where the sample came from (Torre et al., 2004). Other tests and experiments included testing to see if SEM-EDX can differentiate between leaded gasoline, which has particles most similar to GSR, and gunshot residue. The experiment was also done in a blind test fashion and was completely successful in further proving the reliability of SEM-EDX (Nesbitt, Wessel, Jones, 1976). Another positive benefit of the scanning electron microscope tests is the methodology of the collection of the samples that was used. Instead of swabbing the hands an adhesive lift is used (Nesbitt et al., 1976). Since an adhesive lift collects the particles in their relative spots it is possible to determine the ratio of particles in a particular surface area. This gives a more accurate distribution and concentration ratio than swabbing a surface and analyzing the number of particles on the swab. The scanning electron microscope-energy dispersive x-ray spectrometry method allows a much longer testing window from the time the gun was fired. With SEM-EDX positive results can be received up to twelve hours after the shooting (DiMaio, 1999). This is because SEM-EDX combines visual inspection of individual particles as well as a mass calculation of the elemental concentrations. There is also another test that can have positive results for as long as thirty-six to forty-eight hours after the gun was fired. This is done with the trace metal detection technique (TMDT), which uses reagents that change colors under a ultraviolet light after they have come in contact with the elements in GSR (DiMaio, 1999). With the newest technological advances, x-ray fluorescence microscopy allows for an even more precise look at GSR particles. This method uses the excited state of particles due to x-rays and investigators observe these particles underneath high powered microscopes. The particles fluoresce and appear brighter then the surface (Bird, Agg, Barnett, Smith, 2007). The fluorescing particles make the visualization of GSR particles much easier and allows for a more specific determination of the spread of the residue. Destruction and Transferability Some people may say that allowing more time to pass between the firing of the weapon and when the sample is collected is a detrimental thing. The extra time allows people to wash their clothes or hands or try to at least wipe them off. This is another point argued by people who say GSR is unreliable. There is always the possibility that a suspect can wash their hands and clothes after firing a weapon. The fear is that once that has been done that there will no longer be particles left to detect. In the experiment published by Vinokurov, et al. (2001), tests were done on clothes that had been machine-washed and other tests on clothes that had been brushed with another piece of material. 
The tests showed that even though a majority of the GSR particles had been removed, there were still enough particles in some circumstances to get a positive GSR detection (Vinokurov et al., 2001). The results of this experiment proved that even though investigators may allow more time before testing, there are still chances that investigators can get results even after evidence is washed. Besides washing clothes and hands, there is also the possibility that GSR particles can be transferred to another person or surface by direct contact, or if a person is within a close distance when a gun is fired. Gunshot residue is easily rubbed off or transferred to someone else, which sometimes can make deciding what really happened difficult, but not impossible. Even though gunshot residue can be found on a person who did not fire a weapon, certain circumstances will help prove they didn't fire the gun. A person standing within close range can have GSR on them. Although such a person will test positive, the location of the GSR and the concentrations will be different than if that person had pulled the trigger and fired the weapon. For instance, if a person puts their hand out in self-defense against a shooter, there will be residue found on the palm of the hand but very little, if any, on the back of the hand. If the person fired the gun, there would be a high concentration on the back of the hand (Havekost, Peters, & Koons, 1990). Another situation is one that has been argued by defense attorneys. Defense attorneys say that the GSR that was found on the suspect could have been transferred from the hands of the police officer that arrested them. In theory, this may sound possible, but most officers do not even touch their gun on a daily basis, let alone fire it. In a study published by Gialamas, Rhodes, and Sugarman (1995), police officers that had not fired their weapon over a certain period of time were tested for gunshot residue. Forty-three officers were tested, and out of those officers twenty-five showed absolutely no particles that even resembled GSR. Seventeen officers were found to have particles similar to GSR, but these were only environmental contaminants, and three officers were found to have only one particle of GSR (Gialamas et al., 1995). Even though a couple of the officers showed a particle of GSR, there would have to be a much higher concentration of particles in order to conclude that that officer had fired a weapon. Even if the officer had a GSR particle on them, the likelihood of touch transfer, although possible, is extremely small. Even if that particle did transfer when an officer touched a person, the particle would be in a place inconsistent with firing a weapon, such as the shoulder, wrists, back of the neck, etc.

Range Determination

Gunshot residue analysis can be used for purposes other than determining if a suspect was holding the gun that was fired. Gunshot residue can be used to determine how far away from an object the gun was when it was fired. This is done by examining the GSR pattern left on the surface of the target. There are other tests that can be used to better develop and lift the residue pattern from a surface. One of the methods that can be used is the Greiss test. This test involves the use of chemically treated, gelatin-coated photographic paper. The paper transfers the residue pattern by reacting with the nitrates in the gunshot residue.
After the pattern is transferred off of the target surface, test fires are done to match the spread and distribution of the GSR and determine the relative distance of the shooter (Saferstein, 2006). Since the Greiss test can be used on clothes and other target surfaces, this technique shows that gunshot residue is very valuable in the determination of distance. Using gunshot residue for distance determination can also help determine whether a case is a homicide or a suicide (DiMaio, 1999). Sometimes a homicide can be staged to look like a suicide, usually by placing the gun in the hands of the victim. By using the gunshot residue pattern to determine the distance of the weapon when fired, investigators can tell whether or not the victim was holding the weapon. It is only physically possible for a human to hold a gun aimed at themselves a certain distance away from their own body and still be able to pull the trigger; this distance is directly related to the victim's arm length. If the range determination suggests that the distance between the victim and the gun was much greater than the victim's arm length, then the crime was more than likely a homicide (DiMaio, 1999). Gunshot residue can not only be used to determine the distance from which the weapon was fired, but also whether a wound is an entrance or an exit wound. A GSR test can be done on the edges of the wound to see if residue is present (Brazeau & Wong, 1997). A medical examiner may need help determining whether a wound is an entrance or an exit in a couple of circumstances. Sometimes a bullet can ricochet off of another object before hitting the target, which can cause the wound to look different than a normal entrance wound. The medical examiner may also consider using a GSR test to determine if a wound is an entrance wound if the projectile has entered and exited the body multiple times due to the way the victim's body is positioned (Brazeau & Wong, 1997). Another use of gunshot residue tests around the edges of a wound is to see if the wound is in fact from a firearm. Sometimes wounds can look like a gunshot wound but are actually from other sources. One of the sources of wounds that can appear as a gunshot wound is the hole some insects will make while they are feasting and laying eggs on a dead body (Brazeau & Wong, 1997).

Conclusion

No two cases are completely and indisputably the same. That is why each case has to be looked at individually. Investigators need to take the time to evaluate the results of any tests and evidence, including gunshot residue. Just because a gunshot residue test comes back negative does not necessarily mean there was never any residue there. The same goes for a positive result: just because a test comes back positive does not necessarily mean that that person was the shooter or held the gun. With all the new advances in the technology used to test for gunshot residue, the downfalls and errors previously associated with GSR have almost completely been eliminated. Many different studies and experiments have disproved many of the concerns over GSR tests and results being unreliable. GSR also has many other helpful uses in solving cases, such as distance determination and distinguishing between a homicide and a suicide.
While gunshot residue may have been made out to have many downfalls and disadvantages, and while it is not absolutely accurate, nothing in the scientific world is absolutely error-proof; therefore, GSR is extremely helpful and reliable enough to be considered substantial evidence both in investigations and in court.
Tuesday, August 20, 2019
Hydrological Impacts of Wimbleball Reservoir
Hydrological Impacts of Wimbleball Reservoir An Evaluation of the hydrological impacts of Wimbleball Reservoir using the IHA approach A river acts both as a source and carrier of water for supporting and sustaining the biological diversity and integrity of the aquatic, wetland and riparian species and natural ecosystems. To accomplish these functions, it is necessary that river water meets some essential qualitative and quantitative parameters and the stream-flow exhibits the dynamics and hydrological attributes comparable to that of natural or unaltered river flows (hydrologic regime). This hydrologic regime is the lifeline of freshwater ecosystem and all diverse variety of aquatic riparian species are for long accustomed and adapted to the characteristic temporal, spatial and hydrologic variation of water flow cycles attributable to the natural or unaltered water flow. Unfortunately, this regime and its naturally configured variation patterns get disturbed failing to absorb the stresses induced by our ever-increasing demands and environmentally irresponsive use of water. To evaluate the shifts in the pre and post-reservoir hydrologic parameters, the effect of Wimbleball Reservoir have been analysed based on the long-term flow-patterns of the downstream discharge of the reservoir. The analysis was conducted by a very robust statistical model called the IHA model. Both long term differences and RVA analysis show substantial impacts of manmade reservoir control on the biota of the Exe-catchment. Introduction Water bodies like rivers, streams, channels, etc. serve a dual function being essential source points for our day-to-day water requirements as well being its transporters or carriers by flowing in and channelling water downstream to the river beds, catchments and agricultural fields in the process supporting and sustaining the biological diversity and integrity of the aquatic, wetland and riparian species and natural ecosystems. Our earth is also called the ââ¬Ëwater planetââ¬â¢ as water forms approximately 70% of its total surface (The Ground Water Foundation,2003) but only a part of it is available for our use. This realization has long back prompted us to take up some water management practices. In the beginning, water management practices were very much focused on issues like water quality and flood control measures and the overall strategy was never so broad to include other aspects like water quantity, stream flow management and restoration (BD, Richter, etal,1997)2. However, issues pertaining to water quantity, flow, restoration, etc. gradually started to get prominence in our policy framework following a landmark order passed by the US Supreme Court identifying the separation of water quality from water quantity and flow as an artificial distinction and recommending incorporation of both water quality and quantity objectives in a broader and comprehensive water management policy framework (US-EPA, 2002)3. Water quality, quantity flow conditions are in way inseparable features considering the fact that the amount of flow in a river effects many issues of water quality and water quantity at the same time. Therefore, the assessment on the wholesomeness of water in any system is essentially dictated by the above conditions of quality, quantity and flow characteristics. Going by this approach broadens the overall water policy framework making this a comprehensive management initiative. 
This shift in water management approach necessitated re-configuration of the erstwhile single or limited objective-driven practice of flood and storm water control, thereby embracing a comprehensive initiative of total ecosystem management and restoration having multi-utility potential. This system is very important and effective because it takes into account the sustainable use of water resources, or 'water takings', and their possible restoration (Dept. of Fisheries & Oceans, Canada, 2002). Under the ambit of this, it is necessary that river water meets some essential qualitative and quantitative parameters and that the stream-flow exhibits the dynamics and hydrological attributes (hydrologic regime) comparable to those of natural or unaltered river flow (Richter et al.). This hydrologic regime or 'natural flow regime' is the lifeline of the freshwater ecosystem, and the diverse variety of aquatic and riparian species have long been accustomed and adapted to the characteristic temporal, spatial and hydrologic variations of water flow cycles attributable to natural or unaltered water flow. Unfortunately, this regime and its naturally configured variation patterns get disturbed (Allan David Hinz Leon, SNRE, 2004), failing to absorb the stresses induced by our ever-increasing water-takings demands and environmentally irresponsive use of water. In fact, this is the point where human interventions or controls and water integrity issues find themselves on a highly confronting and conflicting platform. Increased water demands compelling human actions like the construction of water reservoirs, dams, impoundments, etc. for storing and using water for domestic, energy and hydropower, artificial parks and various other uses have started taking their toll on river waters and water bodies, substantially degrading the quality and quantity and, importantly, squeezing the downstream water flows (Benue, A. C., 1990). This flow reduction in rivers, consequential to manmade flood and irrigation control practices like reservoirs and dams, is found to alter the natural hydrologic regime, bringing in a series of impairments to the overall ecosystem and also opening up a new front in the field of river and hydrology studies. This paper aims to assess the variations in the hydrological parameters of a river system specifically attributable to impacts of man-made interventions or controls like reservoirs. Primarily, the research ambition is to identify and evaluate the degree of alteration in the hydrologic profile by analysing the long-term historical as well as recent water flow records representative of the pre-impact and post-impact periods of construction and commissioning of a typical reservoir. An emerging computer tool called the 'IHA' (Indicators of Hydrologic Alteration) has been applied to generate scenarios and analyze the data. The records and data needs for this study have been sourced from an existing gauging station on the Exe river of South-West England, strategically selected to represent the influence of the Wimbleball Reservoir.
Natural Flow Regime and Hydrologic Alterations - Ecological Significance

The concept of the natural flow regime is based on the understanding that aquatic and riparian organisms depend upon, or can tolerate, a range of flow conditions specific to each species (Poff et al., 1997). For example, certain fish species move into safer floodplain areas during floods to feed and to escape attacks by other species occupying the main water body, thereby adopting a mechanism to survive on their own. This indicates that if flooding occurs at the right time of the year, and lasts for the right amount of time, these fish populations will ultimately benefit from the flood event. By contrast, other species may be adversely affected by the same flood. With the development of the science of hydrology, it has been confirmed with a good degree of confidence that a hydrologic regime with all its natural and temporal variations (both intra-annual and inter-annual) is needed to maintain and restore the natural form and function of aquatic ecosystems. However, this prerequisite is not in line with traditional water management practice, which is functionally attuned to influencing and dampening natural fluctuations with the objective of providing a steady and undisturbed supply of water for different in-stream and out-of-stream activities (Richter et al., 2003). Moreover, for intervening in and containing extreme drought and flood events, traditional water management initiatives have rather relied on moderating and limiting flow fluctuations. Many studies indicate the 'natural flow regime' as a determinant of the in-stream flow needs of a water body. For example, Richter et al. (1996) and Poff et al. (1997) generalized that natural flow conditions may indicate and determine in-stream flow requirements. There exists a correlation between stream-flow and other physicochemical characteristics critical to the ecological integrity of streams and rivers (Poff et al., 1997). Precisely, flow can be associated with direct as well as indirect or secondary impacts, and as such flow characteristics can be used as surrogates for other in-stream indicators and ecosystem conditions; importantly, the components of a flow regime, as shown in Figure 1, are very much accessible to scientific inquiry (IFC, 2002; Poff et al., 1997; Richter et al., 1996). Any disruption, fragmentation and dilution of this natural regime of water flow leads to 'hydrological alteration', which can in general be defined as any anthropogenic disruption in the magnitude or timing of natural river flows (BioScience 50(9), 2000). The natural flow regime of a river depends on various factors, including rainfall, temperature and evaporation when considered at a broader geographic or macro-scale, and is also influenced by the physical characteristics of a catchment at the catchment level or micro-scale (Rash et al., 1988). As mentioned earlier, river flow regimes are also affected directly and indirectly by human activities. Such human interventions disrupting the natural flow of a river through the construction and operation of reservoirs and dams have the potential to trigger a series of undesirable consequences such as extensive ecological degradation, loss of biological diversity, water quality deterioration, groundwater depletion, and more frequent and intense flooding (Poff et al., 1997).
Reservoirs are built to store water to compensate for fluctuations in river flow, thereby providing a measure of human control over water resources, or to raise the level of water upstream either to increase hydraulic head or to enable diversion of water into a canal. The creation of storage and head allows reservoirs to generate electricity, to supply water for agriculture, industries and municipalities, to mitigate flooding and to assist river navigation (Rash et al., 1988). The biological effects of hydrologic alterations are often difficult to disentangle from those of other environmental perturbations in heavily developed catchments, as identified by Rosenberg et al. (Environmental Reviews 5: 27-54, 1997). The impacts of large-scale hydrological alteration include habitat fragmentation within rivers (Dynesius and Nilsson, 1994); downstream habitat changes such as the loss of floodplains, riparian zones and adjacent wetlands, and the deterioration and loss of river deltas and ocean estuaries (Rosenberg et al., 1997); and the deterioration of irrigated terrestrial environments and associated surface waters (McCall, 1996). Hydrological alterations also bring indirect or secondary impacts at the genetic, ecosystem and global levels. They can cause genetic isolation through habitat fragmentation (Pringle, 1997) and changes in processes such as nutrient cycling and primary productivity (Pringle, 1997; Rosenberg et al., 1997). With the realization of the importance of the natural flow regime and the possible dangers posed by human alterations, a relatively new and promising water and ecology management paradigm has emerged. Many researchers see this as a comprehensive and sound management option and have on many occasions stressed the urgency of protecting or restoring natural hydrologic regimes (Sparks, 1992; National Research Council; Doppler et al., 1993; Dynesius and Nilsson, 1994). Effective ecosystem management of aquatic, riparian and wetland systems requires that existing hydrologic regimes be characterized using biologically relevant hydrologic parameters, and that the degree to which human-altered regimes differ from natural or preferred conditions be related to the status and trends of the biota (Richter et al., 1997). Ecosystem management efforts should be considered experiments, testing the need to maintain or restore natural hydrologic regime characteristics in order to sustain ecosystem integrity. Only limited studies have closely examined hydrologic influences on ecosystem integrity, mainly because most commonly used statistical tools are poorly suited to characterizing hydrologic data in terms of biologically relevant attributes (Richter et al., 1997). Without such knowledge, ecosystem managers will not be compelled to protect or restore natural hydrologic regime characteristics. Recently, however, there have been significant developments in the field of hydrological studies, and a few robust statistical computer tools and models are now known to exist, such as the IHA and its Range of Variability Approach (RVA) (Indicators of Hydrologic Alteration; Richter et al., 1997), the Physical Habitat Simulation System (PHABSIM; Jowett, 1997), the Flow Incremental Methodology (FIM), and other hydrologic modelling software such as GAWSER and the Ontario Flow Assessment Techniques (OFAT) (Jowett, 1997).
The following sections attempt to evaluate and assess the possible effects of hydrological alteration specifically induced by human interventions or activities. A very useful computer model, the IHA model (available at Freshwaters.com), has been used for generating and evaluating the effects of flow variations. The ecological zone considered for analysis in this paper is the Exe river estuary region, and the gauging station selected is 45001 Exe at Thorverton.

The Indicators of Hydrologic Alteration (IHA) Method - Approaches and Application

The evaluation and assessment of the flow regime of the Exe river system, and of the variations it witnessed after the construction of the Wimbleball Reservoir, have been accomplished by applying a very detailed computer-modelling tool known as the IHA, or 'Indicators of Hydrologic Alteration', model. The software essentially originates from the concept of the integrity and wholesomeness of the natural flow regime, and it is configured to determine the relative transformations and variations in this natural flow regime subject to any natural or artificial modifications or alterations (Richter et al., 1997). It first requires defining and identifying a series of biologically relevant hydrologic attributes that characterize intra- and inter-annual variations in water conditions; these are then processed in a robust statistical variation analysis after isolating the data sets into two different periods representing the pre-impact and post-impact scenarios (Rosenberg et al., 2002). The Nature Conservancy is now the custodian of this statistical tool, which is very useful for assessing the degree to which human activities have changed flow regimes (US-EPA, 2002). Brian D. Richter and colleagues at the Nature Conservancy (Richter et al., 1996-97) have identified four basic steps for this analysis:

(i) Define the data series (e.g., stream-gauge or well records) for the pre- and post-impact periods in the ecosystem of interest.
(ii) Calculate values of hydrologic attributes. Values for each of 32 ecologically relevant hydrologic attributes are calculated for each year in each data series, i.e., one set of values for the pre-impact data series and one for the post-impact data series.
(iii) Compute inter-annual statistics. Compute measures of central tendency and dispersion for the 32 attributes in each data series, based on the values calculated in step 2. This produces a total of 64 inter-annual statistics for each data series (32 measures of central tendency and 32 measures of dispersion).
(iv) Calculate values of the Indicators of Hydrologic Alteration. Compare the 64 inter-annual statistics between the pre- and post-impact data series, and present each result as a percentage deviation of one time period (the post-impact condition) relative to the other (the pre-impact condition).

The method can equally be used to compare the state of one system to itself over time (e.g., pre- versus post-impact, as just described); to compare the state of one system to another (e.g., an altered system to a reference system); or to compare current conditions to simulated results based on models of future modifications to a system. The same computational strategies will work with any regular-interval hydrologic data, such as monthly means; however, the sensitivity of the IHA method for detecting hydrologic alteration is increasingly compromised by time intervals longer than a day (Richter et al., 1996-97).
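The four steps above reduce to a short calculation once a daily flow record is split at the reservoir's commissioning date. The sketch below is not the Nature Conservancy's IHA software; it is a minimal Python/pandas illustration of steps 2-4 for a single attribute family (monthly mean flows), assuming a hypothetical daily-flow Series indexed by date and a hypothetical split year.

import pandas as pd

def monthly_iha_deviation(daily_flow: pd.Series, impact_year: int) -> pd.DataFrame:
    """Sketch of IHA steps 2-4 for the 12 monthly-mean attributes.

    daily_flow  : daily mean discharge with a DatetimeIndex (assumed input)
    impact_year : hypothetical first year of the post-impact period
    """
    pre = daily_flow[daily_flow.index.year < impact_year]
    post = daily_flow[daily_flow.index.year >= impact_year]

    def interannual_stats(series: pd.Series) -> pd.DataFrame:
        # Step 2: one attribute value per year (here, the mean flow of each calendar month).
        annual = series.groupby([series.index.year, series.index.month]).mean()
        annual.index.names = ["year", "month"]
        # Step 3: inter-annual central tendency and dispersion for each attribute.
        return annual.groupby(level="month").agg(["mean", "std"])

    table = interannual_stats(pre).join(interannual_stats(post),
                                        lsuffix="_pre", rsuffix="_post")
    # Step 4: percent deviation of the post-impact statistics relative to pre-impact.
    table["mean_pct_dev"] = 100 * (table["mean_post"] - table["mean_pre"]) / table["mean_pre"]
    table["std_pct_dev"] = 100 * (table["std_post"] - table["std_pre"]) / table["std_pre"]
    return table

The same structure extends to the other attribute families simply by swapping in a different per-year calculation at step 2.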
Detection of certain types of hydrologic impacts, such as the rapid flow fluctuations associated with hydropower generation at dams, may require even shorter (hourly) intervals. The authors have also suggested that the basic data for estimating all attribute values should preferably be daily mean water conditions (levels, heads, flow rates). Hydrologic conditions can in general vary in four dimensions within an ecosystem (three spatial dimensions and time). However, the three spatial domains can be scaled down to one with the assumption that only one spatial domain exists at any strategic location over time in a river system. Restricting the domain to one specific point within a hydrologic system (such as a measuring point in a river) makes it simple to identify specific water conditions with one spatial and one temporal domain. These events may be specific water conditions such as heads, levels and rates of change (Richter et al., 1996), whose temporal variations can be recorded and assessed from that particular spatial point, i.e. from a single position. Such temporal changes in water conditions are commonly portrayed as plots of water condition against time, or hydrographs. Here, we seek to study and analyse the variations in hydrologic conditions using indicators and attributes that should be biologically relevant and at the same time responsive to human influences or modifications such as reservoir and dam operations, groundwater pumping and agricultural activities (Richter et al., 1996). Importantly, a variety of features or parameters of the hydrologic regime can be used and functionally superimposed (sensu Southwood, 1977, 1988; Poff and Ward, 1990) to virtually represent and finally characterize the physical habitat templates (Townsend and Hildrew, 1994) or environmental filters (sensu Keddy, 1992) that shape the biotic composition of aquatic, wetland and riparian ecosystems.

The IHA method is based on 32 biologically relevant hydrologic attributes, which are divided into five major groups to statistically characterize intra-annual hydrologic variation, as shown in Table 1. These 32 attributes are based upon the following five fundamental characteristics of hydrologic regimes:

1. The magnitude of the water condition at any given time is a measure of the availability or suitability of habitat, and defines such habitat attributes as wetted area or habitat volume, or the position of the water table relative to wetland or riparian plant rooting zones.
2. The timing of occurrence of particular water conditions can determine whether certain life-cycle requirements are met, or influence the degree of stress or mortality associated with extreme water conditions such as floods or droughts.
3. The frequency of occurrence of specific water conditions such as droughts or floods may be tied to reproduction or mortality events for various species, thereby influencing population dynamics.
4. The duration of time over which a specific water condition exists may determine whether a particular life-cycle phase can be completed, or the degree to which stressful effects such as inundation or desiccation can accumulate.
5. The rate of change in water conditions may be tied to the stranding of certain organisms along the water's edge or in ponded depressions, or to the ability of plant roots to maintain contact with phreatic water supplies.
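Two of the characteristics just listed, timing and rate of change, translate directly into simple per-year calculations on a daily record. The following Python sketch is illustrative only: the attribute definitions are paraphrased from the group descriptions later in the text (Julian date of the 1-day annual extremes; counts and mean rates of daily rises and falls), and the daily-flow Series is an assumed input rather than data from the Thorverton gauge.

import pandas as pd

def timing_and_rates(daily_flow: pd.Series) -> pd.DataFrame:
    """Per-year sketch of the timing of extremes and of daily rise/fall rates."""
    records = {}
    for year, flows in daily_flow.groupby(daily_flow.index.year):
        diffs = flows.diff().dropna()            # day-to-day change in water condition
        rises, falls = diffs[diffs > 0], diffs[diffs < 0]
        records[year] = {
            "julian_day_min": int(flows.idxmin().dayofyear),  # timing of the 1-day minimum
            "julian_day_max": int(flows.idxmax().dayofyear),  # timing of the 1-day maximum
            "n_rises": int((diffs > 0).sum()),                # number of positive daily changes
            "n_falls": int((diffs < 0).sum()),                # number of negative daily changes
            "mean_rise": rises.mean(),                        # mean rate of positive changes
            "mean_fall": falls.mean(),                        # mean rate of negative changes
        }
    return pd.DataFrame.from_dict(records, orient="index")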
A detailed representation of the hydrologic regime can be obtained from these 32 parameters for the purpose of assessing hydrologic alteration. Importantly, all the parameters have good ecological relevance and do not call for any parameter-specific statistical analysis; all of them can be processed by a single, uniform approach (Kozlowski, 1984; Bustard, 1984; Poff and Ward, 1989). Also, because certain stream-flow levels shape physical habitat conditions within river channels, it is necessary to identify hydrologic characteristics that might aid in the detection of physical habitat alterations (Richter et al., 1997). Sixteen of the hydrologic parameters focus on the magnitude, duration, timing and frequency of extreme events, because of the pervasive influence of extreme forces in ecosystems (Gaines and Denny, 1994) and geomorphology (Leopold, 1994); the other 16 parameters measure the central tendency of either the magnitude or the rate of change of water conditions (Table 2). The rationale underlying the five major groupings, and the specific parameters included within each, are described below.

Table 2: Summary of the hydrological parameter groups
Group 1 - Magnitude of monthly water conditions (12 parameters)
Group 2 - Magnitude and duration of annual extremes (10 parameters)
Group 3 - Timing of annual extremes (2 parameters)
Group 4 - Frequency and duration of high and low pulses (4 parameters)
Group 5 - Rate and frequency of change in conditions (4 parameters)

Group 1: Magnitude of Monthly Water Conditions
This group includes 12 parameters, each of which measures the central tendency (mean) of the daily water conditions for a given month. The monthly mean of the daily water conditions describes the 'normal' daily conditions for the month, and thus provides a general measure of habitat availability or suitability. The similarity of monthly means within a year reflects conditions of relative hydrologic constancy, whereas inter-annual variation (e.g., the coefficient of variation) in the mean water condition of a given month provides an expression of environmental contingency (Colwell, 1974; Poff and Ward, 1989). The terms constancy and contingency as used here refer to the degree to which monthly means vary from month to month (constancy) and the extent to which flows vary within any given month (contingency).

Group 2: Magnitude and Duration of Annual Extreme Water Conditions
The 10 parameters in this group measure the magnitude of extreme (minimum and maximum) annual water conditions of various durations, ranging from daily to seasonal. The durations used follow natural or human-imposed cycles and include the 1-day, 3-day, 7-day (weekly), 30-day (monthly) and 90-day (seasonal) extremes. For any given year, the 1-day maximum (or minimum) is represented by the highest (or lowest) single daily value occurring during the year; the multi-day maximum (or minimum) is represented by the highest (or lowest) multi-day average value occurring during the year. The mean magnitudes of high and low water extremes of various durations provide measures of environmental stress and disturbance during the year; conversely, such extremes may be necessary precursors or triggers for the reproduction of certain species. The inter-annual variation (e.g., coefficient of variation) in the magnitudes of these extremes provides another expression of contingency.
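The Group 2 attributes amount to rolling-window minima and maxima of the daily series. A minimal sketch, again assuming a hypothetical daily-flow Series and simplifying the window handling relative to the real IHA software, might look like this:

import pandas as pd

DURATIONS = [1, 3, 7, 30, 90]   # daily, 3-day, weekly, monthly and seasonal windows

def annual_extremes(daily_flow: pd.Series) -> pd.DataFrame:
    """Group 2 sketch: annual n-day minimum and maximum flows for each duration."""
    out = {}
    for n in DURATIONS:
        rolled = daily_flow.rolling(window=n, min_periods=n).mean()  # n-day moving average
        by_year = rolled.groupby(rolled.index.year)
        out[f"{n}d_min"] = by_year.min()   # lowest n-day average within each year
        out[f"{n}d_max"] = by_year.max()   # highest n-day average within each year
    return pd.DataFrame(out)

# Inter-annual mean and coefficient of variation of, say, the 7-day minimum:
# ext = annual_extremes(flow)
# ext["7d_min"].mean(), ext["7d_min"].std() / ext["7d_min"].mean()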
Group 3: Timing of Annual Extreme Water Conditions
This group includes two parameters: one measuring the Julian date of the 1-day annual minimum water condition, and the other measuring the Julian date of the 1-day annual maximum water condition. The timing of the highest and lowest water conditions within annual cycles provides another measure of environmental disturbance or stress by describing the seasonal nature of these stresses. Key life-cycle phases (e.g., reproduction) may be intimately linked to the timing of annual extremes, so human-induced changes in timing may cause reproductive failure, stress or mortality. The inter-annual variation in the timing of extreme events reflects environmental contingency.

Group 4: Frequency and Duration of High and Low Pulses
This group has four parameters: two that measure the number of annual occurrences during which the magnitude of the water condition exceeds an upper threshold or remains below a lower threshold, respectively, and two that measure the mean duration of such high and low pulses. Together, these measures of the frequency and duration of high- and low-water conditions portray the pulsing behaviour of environmental variation within a year and provide measures of the shape of these environmental pulses. Hydrologic pulses are defined here as those periods within a year in which the daily mean water condition either rises above the 75th percentile (high pulse) or drops below the 25th percentile (low pulse) of all daily values for the pre-impact time period.

Group 5: Rate and Frequency of Change in Water Conditions
The four parameters in this group measure the number and mean rate of both positive and negative changes in water conditions from one day to the next. The rates and frequency of change in water conditions can be described in terms of the abruptness and number of intra-annual cycles of environmental variation, and provide a measure of the rate and frequency of intra-annual environmental change.

Assessing Hydrologic Alteration
In assessing the impact of a perturbation on the hydrologic regime, we want to determine whether the state of the perturbed system differs significantly from what it would have been in the absence of the perturbation. In particular, we want to test whether the central tendency or the degree of inter-annual variation of an attribute of interest has been altered by the perturbation (Stewart-Oaten et al., 1986). The assessment of impacts on natural systems often poses difficult statistical problems, however, because the perturbation of interest cannot be replicated or randomly assigned to experimental units (Carpenter, 1989; Carpenter et al., 1989; Hurlbert, 1984; Stewart-Oaten et al., 1986). The lack of replication does not hinder estimation of the magnitude of an effect, but it limits inferences regarding its causes. However, the IHA method is robust and can easily be adapted to more sophisticated experimental designs. A standard statistical comparison of the 32 IHA parameters between two data series would include tests of the null hypothesis that the central tendency or dispersion of each has not changed. However, this null hypothesis is generally far less interesting in impact assessments than questions about the sizes of detectable changes and their potential biological importance.
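The Group 4 pulse definition above, runs of days above the pre-impact 75th percentile or below the pre-impact 25th percentile, also shows how the pre-impact period anchors the thresholds applied to both periods. The sketch below is illustrative only; the run-length counting is simplified relative to the IHA software, and the inputs are again an assumed daily-flow Series and a hypothetical split year.

import pandas as pd

def pulse_stats(daily_flow: pd.Series, impact_year: int) -> pd.DataFrame:
    """Group 4 sketch: yearly count and mean duration of high and low pulses.

    High pulse: a run of days above the pre-impact 75th percentile.
    Low pulse : a run of days below the pre-impact 25th percentile.
    """
    pre = daily_flow[daily_flow.index.year < impact_year]
    high_thresh, low_thresh = pre.quantile(0.75), pre.quantile(0.25)

    def runs(mask: pd.Series):
        # Label consecutive True days as one pulse, then measure each pulse's length.
        labels = (mask != mask.shift()).cumsum()
        grouped = mask.groupby(labels)
        lengths = grouped.sum()[grouped.first()]   # lengths of the True runs only
        return len(lengths), (lengths.mean() if len(lengths) else 0.0)

    rows = {}
    for year, flows in daily_flow.groupby(daily_flow.index.year):
        n_high, dur_high = runs(flows > high_thresh)
        n_low, dur_low = runs(flows < low_thresh)
        rows[year] = {"high_pulse_count": n_high, "high_pulse_duration": dur_high,
                      "low_pulse_count": n_low, "low_pulse_duration": dur_low}
    return pd.DataFrame.from_dict(rows, orient="index")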
A standardized process for assessing hydrologic impacts is included within the IHA software. The Range of Variability Approach (RVA) is another analysis framework in which to assess change in a structured manner. This method of determining hydrologic alteration is based on the premise that there is natural variability in stream-flow. The RVA software plots and determines whether an activity, such as a water taking, would alter the stream-flow outside this normal variability. Significant alteration would occur if the stream-flow regime were altered by more than one standard deviation from the natural variability, which may have ecological consequences.

Development of Pre- and Post-Impact Scenarios
When adequate hydrologic records are available for both the pre-impact and post-impact time periods, application of the IHA method is relatively straightforward using the statistical procedures described above. When pre- or post-impact records are non-existent, include data gaps, or are inadequate in length, however, various data reconstruction or estimation procedures will need to be employed. Examples of such procedures include the hydrologic record extension techniques described by Searcy (1960) and Alley and Burns (1983). Hydrologic simulation modelling or water budgeting techniques can also be used to synthesize hydrologic records for comparison using the IHA method (Linsley et al., 1982).

Accounting for Climatic Differences
Climatic differences between the pre- and post-impact time periods obviously have the potential to substantially influence the outcome of the IHA analysis. Various statistical techniques can be used to test for climatic differences in the hydrologic data to be compared. When the IHA analysis is based upon actual hydrologic measurements rather than estimates produced from models, a reference site or set of sites uninfluenced by the human alterations being examined can be used as a climatic control (Alley and Burns, 1983). For example, a stream-gauge may exist upstream of a reservoir thought to have impacted a study site. Analyses can establish a statistical relationship between stream-flows at the study site and at the upstream reference site using synchronous pre-dam data sets for the two sites. This relationship can then be used to estimate the stream-flow conditions that would have occurred at the study site during the post-impact time period in the absence of the reservoir.

IHA Application - Description of the Study Site
As mentioned earlier, the principal motive of this study is to analyse and evaluate the impacts, if any, of human interventions such as reservoir operations on the overall sanctity and natural integrity, i.e. the natural hydrologic regime, of water bodies such as rivers. Here, the operation of a well-known reservoir on the south-west coast of Britain, the Wimbleball Reservoir, has been identified as the human intervention point; it is used to store and supply water for human needs such as hydropower and drinking water supply (SW-Environment Agency, 2003) and, in the process, it ends up regulating a river system. The downstream water body and habitat expected to come under the influence of the alterations resulting from the Wimbleball Reservoir operations is the Exe river estuary system. The main motivation for selecting this reservoir and river system is the strategically located river monitoring station (gauging station) which falls within its zone of influence. This station is designated as 'No. 45001 - Exe at Thorverton', with a grid reference of '21 (SS) 936016' (NRFA Data Holdings, 2005).
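The RVA criterion described at the start of this section, alteration of more than one standard deviation from the natural variability, can be illustrated with a small numerical check before turning to the study site itself. This is a sketch of the idea only, not the RVA software's target-range computation; it assumes per-year values of a single attribute (for example, the annual 7-day minimum from the earlier sketch) already split into pre- and post-impact Series.

import pandas as pd

def rva_check(pre_annual: pd.Series, post_annual: pd.Series) -> dict:
    """RVA-style sketch: how often post-impact years leave the pre-impact
    mean +/- one standard deviation 'target range' of a single attribute."""
    mean, sd = pre_annual.mean(), pre_annual.std()
    lower, upper = mean - sd, mean + sd
    expected = pre_annual.between(lower, upper).mean()   # fraction of pre-impact years inside the range
    observed = post_annual.between(lower, upper).mean()  # fraction of post-impact years inside the range
    return {
        "target_range": (lower, upper),
        "expected_fraction_inside": expected,
        "observed_fraction_inside": observed,
        # A simple alteration factor in the spirit of the RVA: negative values mean the
        # attribute now falls inside its natural range less often than before.
        "alteration_factor": (observed - expected) / expected if expected else float("nan"),
    }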
Figure 2 (enclosed) shows a diagrammatic representation of the Exe river catchment area along with the positions of the river and the reservoir. The national authority, the NRFA, describes the monitoring station as a 'velocity-area station with cableway and flat-V Crump profile weir constructed in 1973 due to unstable bed conditions' (NRFA, 2005). There is also a minor culvert flow through a mill upstream of the station, which is included in the rating. Notably, low flows are affected significantly by the operations of the Wimbleball Reservoir post-1979 and by exports to the Taw catchment, and the station is the control point for operational releases from Wimbleball (NRFA Data Holdings, 2005). The headwaters drain Exmoor, and the geology is predominantly Devonian sandstones and Carboniferous Culm Measures, with subordinate Permian sandstones in the east; land cover is moorland, forestry and a range of agriculture (NRFA Data Holdings, 2005). The Exe Estuary is a partially enclosed tidal area composed of both aquatic (marine, brackish and freshwater) and terrestrial habitats. The Estuary makes an important contribution to the diversity of British estuaries by virtue of its unspoilt nature, international conservation importance, recreational opportunities and high landscape value (SW-Environment Agency, 2003). The Estuary flows through an open landscape with gently rolling hills on either side. It is shallower than many estuaries in the south-west of England, so the tide plays a significant role, wit
Monday, August 19, 2019
B E C: The New Phase Of Matter
B E C: The New Phase of Matter A new phase of matter has been discovered seventy years after Albert Einstein predicted its existence. In this new state of matter, atoms do not move around as they would in an ordinary gas; in the condensate, the atoms move in lock step with one another and have identical quantum properties. This will make it easier for physicists to research the mysterious properties of quantum mechanics. It was named "Molecule of the Year" because it was such a major discovery, even though it is not a molecule at all. The phase, called the Bose-Einstein condensate (BEC), follows the laws of quantum physics. In early 1995, scientists at the National Institute of Standards and Technology and the University of Colorado were the first to uncover the BEC. They magnetically trapped rubidium atoms and then supercooled the atoms to almost absolute zero. The graphic on the cover shows the Bose-Einstein condensation, where the atoms' velocities peak at close to zero velocity and the atoms slowly emerge from the condensate. The atoms were slowed to this low velocity by using laser beams. The hardware needed to create the BEC is a bargain at $50,000 to $100,000, which makes it accessible to physics labs around the world. The next step is to test the new phase of matter. We do not know yet whether it absorbs, reflects, or refracts light. BEC is related to superconductivity and may unlock some of the mysteries of why some minerals are able to conduct electricity without resistance. The asymmetrical pattern of BEC is thought by some astrophysicists to explain the bumpy distribution of matter in the early universe, a distribution that eventually led to the formation of galaxies. Physicists are working on creating an atom laser using new technology derived from the BEC. The new lasers would be able to create etchings finer than those that etch silicon chips today.
Sunday, August 18, 2019
Getting Past Rejection
Getting Past Rejection We hear about love all around us: in music and movies, on TV, in stories. If you look in the dictionary, love is defined as a tender, warm feeling; warm liking; affection; attachment. Love is simply a choice we make when we find someone who makes us happy and whom we trust with our innermost thoughts and feelings. We hear that love will make us happy. We hear that single people are lonely. We are told that if we are not part of a couple, we are not complete. We all want to be part of this thing called 'love'. Okay, we get a boyfriend or girlfriend; now everything should be perfect. But it's not perfect, because life never is. It is easy to become disappointed. Feelings can change. One person may decide to say good-bye. When that happens, the one left behind will feel rejected. Rejection means someone choosing between one thing and another; the one who doesn't get chosen is rejected. The person who feels rejected thinks they are not good enough. It hurts. When the person you love decides to leave you, it is even more painful. Does rejection mean failure? No. The end of a relationship means that the boyfriend or girlfriend decided that s/he wanted a change in the path of their life. The reasons for this lie within the ex, not within the rejected person. No one is a less valuable person because their boyfriend's or girlfriend's feelings have changed. The bad thing about getting dumped or abandoned is that it costs us our self-esteem. We feel a full tidal wave of rejection bring us to our knees, sucking the wind out of our sails. We form an inner hate and get caught in a self-destructive mode. We create within ourselves intense feelings of rejection, isolation, and a profound loss of love, acceptance, and control. Being dumped creates a grief that is far more intense than the loss of love through death. With death, the person who has died has not consciously elected to withdraw their love for you. You get a sense of closure and finalization. Death has no possibility of changing its mind! But when we are dumped, the person has made the decision to withdraw from you and desert you. They have rejected you, turned their back on you, and, often, moved on to someone else.
North Ireland Conflict
Political Unrest in Ireland

There has been a continuing conflict in Ireland that has been going on for decades and affects the world to this day. It is essentially a political and religious struggle between several groups. The British have played a key role in the situation since the early 1900s, and even further back into the past.

Origins of the Conflict

The conflict in Ireland has its roots as far back as the 1500s. Ireland has historically been recognized as a Catholic country. However, when King Henry VIII was ruling in Britain, Ireland was brought under British control. At the time, Britain was predominantly a Protestant country. Tension between the Catholic majority and the Protestant minority began to arise between the two faiths. Throughout the years the British and the Protestants began to tighten their grip and control in Ireland. In 1534 Henry VIII had the Irish parliament declare him King of Ireland. The native Irish viewed the British as a major threat to their customs. There have been multiple uprisings and rebellions by the Irish people against the British. A British and Spanish alliance was able to put to rest all of the major uprisings.

The English began to settle areas of Ireland with Protestants, beginning in the early 1600s. The northern regions of Ireland became one of the more heavily settled areas. The all-island Kingdom of Ireland (1541-1801) was incorporated into the United Kingdom in 1801 under the terms of the Act of Union, under which the kingdoms of Ireland and Great Britain merged under a central parliament, government and monarchy based in London. In the early 20th century Unionists, led by Sir Edward Carson, opposed the introduction of Home Rule in Ireland. Unionists were a minority on the island of Ireland as a whole, but formed a majority in the northern province of Ulster (en.wikipedia.org/Northern_Ireland).

Involved Groups and Peoples

The two major groups involved are the Protestants and the Catholics. The Protestants have their roots in the British who migrated to the region when King Henry VIII was in power. The Protestants are predominantly Unionists. Unionists are "people in Ireland, Scotland, and Wales who were historically in favor of uniting their nations into a United Kingdom, or who in modern times wish their nation to remain a part of the United Kingdom" (www.wikipedia.com). The Protestants are the majority inhabitants of Northern Ireland today. The Catholics are predominantly known as Nationalists, and are descendants of the Irish population predating the settlement of the English and Scottish.
Saturday, August 17, 2019
High School vs. College Essay
I think a good education is an important part of one's life. To achieve a good education, one should attend both High School and College. The transition from High School to College is a step that a student will either adjust to or struggle with. Although some people think High School has a lot in common with College, I find they have a few differences. There are also certain similarities, so that one won't feel as if College is a new world. The more prepared a person is to face the differences and similarities, the more successful they might be. High School and College are both educational grounds for a student to grow in knowledge. A student graduates from High School and again from College with a degree. Both places are full of experiences and filled with numerous memories. The government runs them. They both play an important role in making a person into a collected individual and a member of society. High School students know that there are differences between High School and College, but sometimes what they think is not how it is. To begin with, there are many ways in which the attitudes of the teachers in High School differ from the attitudes of the teachers in College. In High School, my teachers seemed to be stricter and to have more rules for the students to follow. There was an everyday time schedule for each student to go by. Students go through drama in High School which some cannot get out of. Attendance is very important in High School as well as in College. Many teachers enforce it while others do not. I have noticed that in College it is the student's responsibility to come to class. Professors believe that students should be mature enough to make their own decision on whether to attend class or not, and they leave it to them to make that decision. When a student graduates from High School, a sense of maturity comes to them. They start realizing that everything in High School was materialistic, and College is practical. College is different from High School in its personal freedoms, its classrooms and its social life. In College, no one is concerned about the basic everyday drama that would surround a student in High School. College prepares a student to face the real world and how to handle it. It separates the mature people from the immature people. However, a person who wants to attend College has to pay to further her education. If a student doesn't take College seriously and apply herself, she knows she has wasted her hard-earned money, or her parents'. So, since students must pay to get into College, they work and study harder than they did in High School. Therefore, a student will study the required courses, finish her education with a degree and start a career. I don't think I would ever want to go back to High School. I love College and all the freedom that comes with it. All there is in College is education. Now I am learning to be a better person, to improve, and to learn different study habits. High School is only the first step in growing up and preparing you for College, whereas College is preparing you for your career.
Friday, August 16, 2019
Is the Earth large or small? Essay
Any information concerning the size of the earth is likely to refer to its description within a context of relativity. As one of the planets in the solar system, the earth is large relative to its planetary counterparts. It is the largest and most massive of the terrestrial planets (which include Mars, Venus, and Mercury) within the solar system. In addition, the earth is also denser than the other planets within its solar system. However, compared with the non-terrestrial planets (Saturn, Jupiter, Uranus and Neptune), the earth is very small. In comparison with the sun, the earth is tiny. The mass of the earth is 5.9736 x 10^24 kg. The mass of the sun, by comparison, is 1.99 x 10^30 kg, which is 332,946 times that of the earth. On the size scales within the solar system, therefore, the earth might be considered medium sized. However, since the sun is quite miniscule compared to other stars and to the physical bodies within and beyond the galaxy, the size of the earth on a universal scale approaches the infinitesimal.

2. What are the major differences between parallels and meridians?

Parallels (latitudes) differ from meridians primarily in the directions in which they run. While parallels always run east-west, meridians run north-south, with each meridian crossing every parallel. This is because meridians all run through the earth's axis, which ensures that they all converge at the poles. The parallels, or latitudes, run parallel to each other, which ensures that they never meet in their journeys around the earth. One effect of this difference is that while parallels remain equidistant from the equator and the poles at every point on their circumference, meridians change their distance from each other the closer to or farther from the poles they are. Therefore, at the equator, the distance between any two given meridians will always be greater than at any other latitude on the earth.
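Both comparisons above, the planetary mass ratio and the convergence of meridians toward the poles, are easy to make concrete. The snippet below is an illustration only: the mass figures are those quoted in the answer (the computed ratio comes out close to the quoted 332,946), while the 111.32 km length of one degree of longitude at the equator and the cosine rule for its shrinkage with latitude are standard spherical-earth approximations that are not stated in the essay.

import math

EARTH_MASS_KG = 5.9736e24
SUN_MASS_KG = 1.99e30
print(f"Sun/Earth mass ratio: {SUN_MASS_KG / EARTH_MASS_KG:,.0f}")   # roughly 333,000

# One degree of longitude shrinks with the cosine of latitude because meridians
# converge toward the poles, while parallels stay parallel to one another.
KM_PER_DEGREE_AT_EQUATOR = 111.32   # approximate, spherical-earth figure
for lat in (0, 23.5, 45, 66.5, 90):
    km = KM_PER_DEGREE_AT_EQUATOR * math.cos(math.radians(lat))
    print(f"1 degree of longitude at latitude {lat:>4} is about {km:6.1f} km")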
3. Why are vertical rays of the Sun never experienced poleward of the tropic lines?

The sun's vertical rays are experienced only between 23.5°N and 23.5°S, primarily as a result of the tilt of the earth's axis. This tilt measures 23.5 degrees, so as the earth revolves around the sun, its poles tilt toward or away from the sun at this angle. During the summers (which fall in opposite parts of the year for the northern and southern hemispheres), one pole is tilted toward the sun. Poleward of the tropics, however, the geometry ensures that the angle at which the sun's rays hit the surface is always less than the 90 degrees that would constitute a direct hit. Because of this tilt, the rays of the sun can shine directly only on those parts of the earth that lie between the latitudes remaining in the direct path of the rays after the 23.5° tilt. The further north or south of these latitudes one goes, the less direct the contact the earth makes with the sun's rays. In fact, at the extreme, very close to the poles the sun's light is not seen at all at certain times of the year.

4. On which day of the year do the vertical rays of the Sun strike farthest north of the Equator? What is the latitude? Why?

The day on which the sun's vertical rays strike the earth farthest north of the equator is approximately June 21. This is the June (summer) solstice, the time when the Northern Hemisphere experiences its longest daytime period. The latitude at which this occurs is 23.5°N, the latitude of the Tropic of Cancer. This occurs primarily because of the earth's axial tilt of about 23.5 degrees: at the June solstice the Northern Hemisphere is tilted most directly toward the sun, so the sun's rays strike vertically at the spot farthest north that is possible at any time of year. This 'spot' lies at 23.5° north of the equator. (Six months later, at the December solstice around December 22, the situation is reversed and the vertical rays strike farthest south, at the Tropic of Capricorn.)

5. Explain the implications of the statement, 'No map is totally accurate.'

According to mapping standards held by the United States (and likely by other countries), maps have to maintain accuracy within a given scale. For example, at scales where one (1) inch on the map represents 24,000 inches on land (or sea), the error of the map should not exceed 1/50th of an inch in more than 10% of the points tested (USGS). These standards are based upon the premise, or understanding, that no map can be completely accurate. What this means is that at minute scales on the ground or sea, it becomes impossible to locate things with a large degree of accuracy. This can be seen more clearly when it is known that 1/50th of an inch on a 1:24,000 scale represents 40 feet (USGS). Therefore, on important expeditions that require map use, a user may remain uncertain about the exact location of a designated point within at least a 40-foot radius.

6. A globe can portray Earth's surface more accurately than a map, but globes are rarely used. Why?

Globes are more accurate than maps because, while a map distorts the latitude lines, the shapes of its landmasses and other features, these are kept true to form on a globe. However, globes are rarely used because their three-dimensional nature makes them more difficult to navigate with than two-dimensional maps. The shapes made by the intersection of parallels and meridians are also less like simple geometric shapes. Because of the way in which the latitude and longitude lines are portrayed on many maps (as straight and parallel, thereby creating the illusion of squares), maps are usually more suited to calculations done by the lay person or navigator. These straight lines correspond to what have been termed loxodromes (also known as rhumb lines). A rhumb line represents a constant compass bearing, and calculations using these lines make it easier for navigators to determine the direction of their courses (Rosenberg). Maps are also more intuitively like the way humans view the surface of the earth. From our perspective, it does not appear to be a sphere but a large expansive area. Therefore, maps accord more with our everyday experience and are easier for humans to interpret.
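The 40-foot figure quoted in the answer to question 5 follows directly from the scale arithmetic; a quick illustrative check of that calculation:

# Map accuracy example cited above: at 1:24,000 the horizontal
# tolerance is 1/50 of an inch measured on the map.
scale = 24_000
tolerance_on_map_inches = 1 / 50
tolerance_on_ground_inches = tolerance_on_map_inches * scale   # 480 inches on the ground
tolerance_on_ground_feet = tolerance_on_ground_inches / 12     # 40 feet
print(tolerance_on_ground_feet)   # -> 40.0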
7. Distinguish between GPS and GIS. Provide ways in which these tools can be useful to physical geographers.

The Global Positioning System (GPS) is a system that facilitates the location of objects or areas on or around the earth based on a group of satellites that have been launched into orbit at about 11,000 miles (Corvallis). This differs from a GIS, a Geographical Information System, which is a database that holds the locations of a large number of features on the earth. The difference between the two lies in the fact that the GPS is the system used for positioning an object, while the GIS is the database in which those positions and related information are stored. The GPS is of immense importance because of the level of accuracy it provides, whether at the scales required by navigators or those required for geodesic positioning (ISSA). GIS allows geographers to know, map, and locate specific regions or objects on the earth's surface. It also allows them to chart paths from one location to the next by accurately calculating vectors that denote the relative distances and directions between given locations. The GPS continually expands the data available by embodying the technology that allows new places to be located and pinpointed.

Works Cited

Corvallis. "Introduction to the Global Positioning System for GIS or TRAVERSE." CMTINC.com. Corvallis, OR: Corvallis Microtechnology Incorporated. http://www.cmtinc.com/gpsbook/index.htm

ISSA. "The Global Information System." The International Strategic Studies Association. 2004. http://128.121.186.47/ISSA/gis/index.htm

Rosenberg, Matt T. "Peters Map vs. Mercator Map." About Geography. New York: New York Times Company. http://geography.about.com/library/weekly/aa030201b.htm

USGS. "Map Accuracy Standards." United States Geological Survey. Reston: U.S. Department of the Interior. 1999. http://erg.usgs.gov/isb/pubs/factsheets/fs17199.html