Wednesday, October 30, 2019

Many Nations Native Americans Essay Example

There was a group of Cherokee people, led by Principal Chief John Ross, who wanted to remain in their homelands. The opposition to the removal of the Cherokees was justifiable and rested on a valid argument. Initially, all the Cherokees were united in opposing removal from their ancestral homelands. Even after the 1832 court ruling that the Cherokees should be allowed to live on their ancestral lands, the government did not heed it. The land lottery enacted in 1830 was implemented, with citizens of Georgia the beneficiaries of the Cherokees’ land. The Cherokees attempted to press their opposition with the government. Although some Cherokees had no hope of ever regaining their land, a group of them remained totally opposed to removal. One of the strongest believers that the Cherokees should not be removed from their homeland was Principal Chief John Ross, who had the support of the majority of the people. However, a rift among the people created instability in the Cherokee government. Several advantages helped the group opposed to removal remain dominant and stronger. First, under Principal Chief John Ross they controlled the Cherokee government, which meant that rebels were expelled from the government once they were identified. Second, they were the majority; the people behind Principal Chief John Ross far outnumbered those who attempted to collaborate, which kept the resistance strong. Third, the elite among the Cherokees supported non-removal, including Principal Chief John Ross and his brother, among other leaders who were more educated. Despite their concerted efforts, the non-removal delegation was defeated because both the federal and state governments supported removal. Once the Senate ratified the Treaty of New Echota, the battle was lost despite the push by Ross and his leadership.

Monday, October 28, 2019

Major Trends Which Affect Microprocessor Information Technology Essay

In the first section I selected the question about the Memory Management Unit of the Linux operating system. There I describe the strategies and mechanisms used by memory management, the problems faced by these techniques, and the solutions to overcome them. In the second section I chose the question about microprocessors. That question discusses how microprocessors work, the major trends affecting their performance, and the differences between microprocessor design goals for laptops, servers, desktops and embedded systems.

Section 1: Linux Operating System

Introduction

Linux, a free, open-source operating system, does sufficient memory management to keep the system stable and meet users’ demand for error-free operation. As processes and threads execute, they read instructions from memory and decode them. In doing so, instructions either fetch or store the contents of a location in memory; the processor then executes the instructions, and in either case memory is accessed, whether to fetch instructions or to store data. Linux uses a copy-on-write scheme: if two or more programs are using the same block of memory, only one copy is actually in RAM and all the programs read that same block. If one program writes to the block, a copy is made for just that program, while all the other programs still share the original memory. Linux also handles memory in such a way that when RAM is not otherwise in use, the operating system uses it as disk cache.

[Figure: a brief overview of the Linux operating system.]

Memory Management

The term memory management refers to one of the most important parts of the operating system. It is concerned with providing memory-related services to applications. These services include virtual memory (use of a hard disk or other non-RAM storage media to provide additional program memory), protected memory (exclusive access to a region of memory by a process), and shared memory (cooperative access to a region of memory by multiple processes). Linux memory management is built on the Memory Management Unit (MMU), which translates the linear (virtual) addresses used by the system into physical memory addresses; a page fault interrupt is raised when the processor tries to access memory it is not entitled to.

Virtual Memory

Virtual memory in Linux uses a disk as an extension of RAM, so that the effective size of usable memory grows correspondingly. The kernel writes the contents of a currently dormant block of memory to the hard disk so that the memory can be used for another purpose; when the original contents are needed again, they are read back into memory. This is made completely transparent to the user: programs running under Linux only see the larger amount of memory available and do not notice that parts of them reside on disk from time to time. Obviously, reading and writing the hard disk is slower (on the order of a thousand times slower) than using real memory, so programs do not run as fast. The part of the hard disk that is used as virtual memory is called the swap space. The virtual memory system deals entirely in virtual addresses, not physical addresses. These virtual addresses are translated into physical addresses by the processor, based on information held in a set of tables maintained by the operating system. To make this translation easier, virtual and physical memory are divided into handy-sized pieces called pages. These pages are all the same size; if they were of different sizes, the system would be very hard to administer.
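To make the page idea concrete, here is a small C sketch that queries the kernel's page size with sysconf(_SC_PAGESIZE) and splits an ordinary virtual address into its page number and offset. It is only an illustration of the arithmetic; the variable names are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* Ask the C library for the page size the kernel is using (typically 4096 bytes). */
    long page_size = sysconf(_SC_PAGESIZE);

    int value = 42;
    uintptr_t addr = (uintptr_t)&value;        /* a virtual address inside this process */

    /* A virtual address is conceptually split into a page number and an offset. */
    uintptr_t page_number = addr / (uintptr_t)page_size;
    uintptr_t offset      = addr % (uintptr_t)page_size;

    printf("page size   : %ld bytes\n", page_size);
    printf("address     : %#lx\n", (unsigned long)addr);
    printf("page number : %#lx, offset: %#lx\n",
           (unsigned long)page_number, (unsigned long)offset);
    return 0;
}
```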
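The copy-on-write behaviour mentioned above can also be observed indirectly from user space with fork(): the child initially shares the parent's pages, and a private copy is made only when one of them writes. This is a minimal sketch assuming a POSIX system; it shows the visible effect (the parent's value stays unchanged), not the kernel's internal bookkeeping.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int shared_value = 100;   /* one copy in RAM until somebody writes */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: writing triggers copy-on-write, so the child gets its own page. */
        shared_value = 999;
        printf("child  sees %d\n", shared_value);   /* 999 */
        _exit(EXIT_SUCCESS);
    }

    wait(NULL);
    /* Parent: its page was never modified through the child, so the original value remains. */
    printf("parent sees %d\n", shared_value);        /* 100 */
    return 0;
}
```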
The Schemes for Memory Management

The simplicity of the Linux memory model facilitates program implementation and portability across different systems. There are two schemes for implementing memory management in Linux:

1. Paging
2. Swapping

Paging (Demand Paging)

Physical memory is saved by loading only the virtual pages that are actually being used by an executing program. For instance, when a program runs to query a database, the whole database does not need to be loaded, only the data records being examined; if the request is a search query, there is no need to load the database code that adds new records. Loading virtual pages only as they are accessed is referred to as demand paging. The purpose of demand paging is to load executing images into a process's virtual memory. Whenever a command is executed, the file containing it is opened and its contents are mapped into the process's virtual memory. Memory mapping is done by modifying the data structures describing that process's memory map. Only the first part of the image is actually brought into physical memory; the rest of the image is left on disk. Linux uses the memory map to identify which parts of the image to load into memory, as page faults are generated while the image executes.

[Figure: MMU versus IOMMU memory translation.]

Page Faults

A page fault exception is generated when a process tries to access a page that is unknown to the memory management unit. The handler examines the currently running process's memory information and the MMU state, and then determines whether the fault is "good" or "bad". Good page faults cause the handler to give more memory to the process, while bad faults cause the handler to terminate the process. Good page faults are expected behaviour whenever a program allocates dynamic memory, runs a section of code or writes a section of data for the first time, or increases its stack size. When the process tries to access this new memory, the MMU declares a page fault and the system adds a fresh page of memory to the process's page table; the interrupted process is then resumed. Bad faults occur when a process tries to access memory that it does not own or follows a NULL pointer. They can also be caused by bugs in the kernel, in which case the handler will print an "oops" message before killing the process.

Swapping

Linux separates its physical RAM (random access memory) into pieces of memory called pages. Swapping is accomplished by copying a page of memory to a preconfigured space on the hard disk, known as swap space, in order to free that page of memory. The combined size of the physical memory and the swap space is the amount of virtual memory available. Swapping is done mainly for two reasons. First, the system may need more memory than is physically available: the kernel swaps out the less used pages and gives the resources to the currently running processes. Second, a significant number of the pages used by an application during its start-up phase may only be used for initialisation and then never used again; the system can swap out those pages and free the memory for other applications or even for the disk cache.
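As a small way to observe the physical-memory and swap figures this section discusses, the sketch below uses the Linux sysinfo() call to print total and free RAM and swap. The fields are those of struct sysinfo; the conversion to MiB is only for readability.

```c
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo info;
    if (sysinfo(&info) != 0) {
        perror("sysinfo");
        return 1;
    }

    /* Sizes are reported in units of mem_unit bytes; convert to mebibytes. */
    unsigned long long unit   = info.mem_unit;
    unsigned long long to_mib = 1024ULL * 1024ULL;

    printf("RAM  total: %llu MiB, free: %llu MiB\n",
           (unsigned long long)info.totalram  * unit / to_mib,
           (unsigned long long)info.freeram   * unit / to_mib);
    printf("Swap total: %llu MiB, free: %llu MiB\n",
           (unsigned long long)info.totalswap * unit / to_mib,
           (unsigned long long)info.freeswap  * unit / to_mib);
    return 0;
}
```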
Nevertheless, swapping does have a disadvantage: compared with memory, disks are very slow. Memory speeds are measured in nanoseconds while disk speeds are measured in milliseconds, so accessing physical memory can be significantly faster than accessing the disk. How much this matters depends on how often swapping occurs; if it happens frequently, the system will be slower. Sometimes excessive swapping, or thrashing, occurs, where a page is swapped out, very soon swapped back in, swapped out again, and so on. In such situations the system is struggling to find free memory and keep applications running at the same time, and only adding more RAM will help. There are two forms of swap space: the swap partition and the swap file. The swap partition is a dedicated section of the hard disk that is used only for swapping; no other files can be located there. The swap file is a special file in the file system that sits among your system and data files.

Problems of Virtual Memory Management in Linux

There are several possible problems with the page replacement algorithm in Linux, which can be listed as follows:

- The system may react badly to variable VM load or to load spikes after a period of no VM activity. Since kswapd, the page-out daemon, only scans when the system is low on memory, the system can end up in a state where some pages have reference bits from the last 5 seconds while other pages have reference bits from 20 minutes ago. This means that on a load spike the system has no clue which are the right pages to evict from memory, which can lead to a swapping storm, where the wrong pages are evicted and almost immediately faulted back in, leading to the page-out of another random page, and so on.

- There is no method to prevent a possible memory deadlock. With the arrival of journaling and delayed-allocation file systems it is possible that the system will need to allocate memory in order to free memory, that is, to write out data so memory can become free. It may be useful to introduce an algorithm to prevent this possible deadlock under extremely low memory situations.

Conclusion

All in all, Linux memory management seems to be more effective than before, and this is based on the assumption that Linux runs fewer applications compared to Windows machines, which have more users and more applications. Besides, although the system may react badly to variable VM load, regular updates to Linux have managed to lessen the bugs. Swapping does require more disk space when the physical memory is insufficient to serve more demanding applications, and if the disk space is too low the user runs the risk of waiting for, or killing, other processes so that other programs can work. Additionally, resuming swapped pages may result in corrupted data, but Linux has had the upper hand in solving such bugs.

Frequently Asked Questions

What is the main goal of the Memory Management Unit? The Memory Management Unit should be able to decide which processes should reside in main memory, should control the parts of the virtual address space of a process that are not core-resident, and is responsible for monitoring the available main memory and for writing processes out to the swap device so that more processes can fit in main memory at the same time.

What is a page fault? A page fault occurs when a process addresses a page that is in its working set but is not currently located in main memory. To resolve it, the kernel updates the working set by reading the page back in from the secondary device.
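A page fault of the "good" kind described above can be provoked deliberately from user space: the sketch below maps a file with mmap(), which only sets up page-table entries, and the first access to the mapping is then resolved by the kernel through a page fault that reads the data in on demand. This is a minimal POSIX sketch; the file path is chosen purely for illustration.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/etc/hostname";      /* any readable file; illustrative only */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return EXIT_FAILURE; }

    /* Mapping does not read the file: it only sets up page-table entries. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return EXIT_FAILURE; }

    /* Touching the mapping triggers a page fault; the kernel loads the page on demand. */
    printf("first byte of %s: '%c'\n", path, data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```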
What is the minimum memory requirement? Linux needs at least 4 MB of RAM, and with that little memory you will need to use special installation procedures until the disk swap space is installed. Linux will run comfortably in 4 MB of RAM, although running GUI applications is impractically slow because they need to swap out to disk.

Section 2: Microprocessor

Introduction

A microprocessor incorporates all or most of the functions of a central processing unit (CPU) on a single integrated circuit, so in the world of personal computers the terms microprocessor and CPU are used interchangeably. The microprocessor is the brain of any computer, whether it is a desktop machine, a server or a laptop. It processes instructions and communicates with outside devices, controlling most of the operation of the computer.

How Microprocessors Work

Microprocessor Logic

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on those instructions, a microprocessor does three main things:

1. Using its ALU (Arithmetic/Logic Unit), it performs mathematical operations such as addition, subtraction, multiplication and division.
2. It moves data from one memory location to another.
3. It makes decisions and jumps to a new set of instructions based on those decisions.

An extremely simple microprocessor capable of doing these three jobs contains:

- An address bus that sends an address to memory
- A data bus that can send data to memory or receive data from memory
- RD (read) and WR (write) lines that tell the memory whether to set or get the addressed location
- A clock line that lets a clock pulse sequence the processor
- A reset line that resets the program counter to zero and restarts execution

As for the components and how they perform: registers A, B and C are latches built out of flip-flops; the address latch is just like registers A, B and C; and the program counter is a latch with the extra ability to increment by 1, or to reset to zero, when needed.
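To see how the three jobs listed above, the registers and the program counter fit together, here is a purely illustrative fetch-decode-execute loop in C. The opcodes, the accumulator and the tiny "memory" array are invented for the sketch and do not correspond to any real microprocessor.

```c
#include <stdio.h>

/* A toy accumulator machine; the opcodes are invented purely for illustration. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

struct instr { int op; int addr; };

int main(void)
{
    int memory[4] = { 7, 35, 0, 0 };   /* data memory: two inputs, room for the result */
    int acc = 0;                        /* accumulator register (a latch, like A/B/C)   */
    int pc  = 0;                        /* program counter                              */
    int running = 1;

    /* Program: result = memory[0] + memory[1], stored into memory[2]. */
    const struct instr program[] = {
        { OP_LOAD,  0 },
        { OP_ADD,   1 },
        { OP_STORE, 2 },
        { OP_HALT,  0 },
    };

    while (running) {
        struct instr in = program[pc++];   /* fetch the next instruction, advance the PC */
        switch (in.op) {                   /* decode and execute                          */
        case OP_LOAD:  acc = memory[in.addr];  break;  /* data bus: memory -> register */
        case OP_ADD:   acc += memory[in.addr]; break;  /* ALU: arithmetic              */
        case OP_STORE: memory[in.addr] = acc;  break;  /* data bus: register -> memory */
        case OP_HALT:  running = 0;            break;
        }
    }

    printf("memory[2] = %d\n", memory[2]);   /* prints 42 */
    return 0;
}
```

A real processor would also have conditional-jump instructions that load a new value into the program counter, which is how the "make decisions and jump" job is carried out.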
Major Trends Which Affect Microprocessor Performance and Design

Increasing number of cores. A dual-core processor is a CPU with two processors, or execution cores, in the same integrated circuit. Each processor has its own cache and controller, which enables it to function as efficiently as a single processor; however, because the two processors are linked together, they can perform operations up to twice as fast as a single processor can. The Intel Core Duo, the AMD X2 and the dual-core PowerPC G5 are all examples of CPUs that use dual-core technology; each combines two processor cores on a single silicon chip. This is different from a dual-processor configuration, in which two physically separate CPUs work together. Some high-end machines, such as the PowerPC G5 Quad, use two separate dual-core processors together, providing up to four times the performance of a single processor.

Reducing the size of the processor. Process size is one of the major trends that has affected processors in recent years. When the process geometry becomes smaller there are many advantages: more cores can be included in one processor, energy is saved, and speed increases.

45nm processor technology. Intel has introduced 45nm technology in the Intel Core 2 and Intel Core i7 processor families. Intel 45nm high-k silicon processors contain a larger L2 cache than 65nm processors.

32nm processor technology. At the research level, Intel has introduced a 32nm processor (the Nehalem-based Westmere), which it expects to release in the second quarter of 2009.

Energy saving. Energy is one of the most important resources in the world, so we must save and protect it for the future. Power consumption in microprocessors is therefore one of the major trends. For instance, the Intel Core 2 family of processors is very efficient and has intelligent power management features, such as the ability to deactivate unused cores, yet it still draws up to 24 watts in idle mode.

High-speed cache and buses. In past years, microprocessor manufacturers like Intel have introduced new cache technologies that gain further efficiency improvements and reduce latency. Intel Advanced Smart Cache technology is a multi-core cache that reduces latency to frequently used data; in modern processors the cache size has increased to as much as 12 MB.

[Figure: installing a heat sink on a microprocessor.]

Differences Between Microprocessors

Servers. A microprocessor for a server should primarily give long uptime and stability with low power consumption, allocating fewer processor resources to the system cache. That is why Unix and Linux are most often used as server operating systems: they take a smaller amount of hardware resources and use them effectively, so the heat dissipated by the processor is lower and there is less heating.

Desktop processors. Desktop microprocessors are a bit different from server microprocessors because they are not as concerned with power consumption or with using fewer operating system resources. The goal of desktop microprocessors is to deliver as much performance as possible while keeping the cost of the processor low and power consumption within reasonable limits. Another important fact is that most of the programs used on desktop machines are designed for long processor-bound jobs, such as rendering a high-definition image or compiling a source file, so the processors are also designed to handle those kinds of processing.

Laptop processors. The CPU produces a lot of heat; in desktop computers a system of fans, heat sinks, channels and radiators is used to cool the computer. Since a laptop is small, with far less room for any cooling method, the CPU usually:

- Runs at a lower voltage and clock speed (which reduces heat output and power consumption but slows the processor down)
- Has a sleep or slow-down mode (when the computer is not in use, or when the processor does not need to run as quickly, the operating system reduces the CPU speed)

Embedded microprocessors. Most embedded devices use microcontrollers instead of separate microprocessors; a microcontroller is an implementation of a whole computer inside a small, thumb-sized chip. These microcontrollers vary in performance because of battery consumption and instruction-length issues. Most of them are designed using a RISC architecture to minimise the complexity and the number of instructions per processor. Embedded device processors have high speed potential, but the problems they face are power consumption and heating.

Conclusion

Current technology allows one processor socket to provide access to one logical core, but this approach is expected to change, enabling one processor socket to provide access to two, four, or more processor cores. Future processors will be designed to allow multiple processor cores to be contained inside a single processor module.
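As a small complement to the multi-core discussion, this sketch asks the operating system how many logical processors are currently online. It assumes a system where sysconf(_SC_NPROCESSORS_ONLN) is available; this is a common extension on Linux and BSD rather than part of the base POSIX standard.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of logical processors currently online (cores x hardware threads). */
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    if (online < 1) {
        perror("sysconf");
        return 1;
    }
    printf("logical processors online: %ld\n", online);
    return 0;
}
```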
Frequently Asked Questions

1. How does the operating system share the CPU in a multitasking system? There are two basic ways of establishing a multitasking environment: time-slice based and priority based. In a time-slice multitasking environment each application is given a set amount of time (250 milliseconds, 100 milliseconds, etc.) to run, and then the scheduler turns execution over to some other process. In such an environment each READY application takes a turn, allowing them to effectively share the CPU. In a priority-based environment each application is assigned a priority, and the process with the highest priority is allowed to execute as long as it is ready, meaning that it will run until it needs to wait for some kind of resource such as operator input, disk access or communication. Once a higher-priority process is no longer ready, the next highest-priority process begins execution until it is no longer ready or until the higher-priority process takes the processor back. Most real-time operating systems in use today tend to be some kind of combination of the two.

2. What is a multi-core processor? Two or more independent cores combined into a single package, built on a single integrated circuit, are known as a multi-core processor.

3. What is the difference between a processor and a microprocessor? Generally, a processor is the part of a computer that interprets (and executes) instructions. A microprocessor is a CPU that fits on just one IC (chip). For example, the CPU in a PC is on a single chip, so it can also be referred to as a microprocessor. The name arose because in the older days processors would normally be implemented across many ICs, so it was considered quite a feat to include the whole CPU on one chip, and that chip was called a microprocessor.
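To illustrate the time-slice idea from the first question above, here is a toy round-robin scheduler in C: each "task" is just a record of how much work remains, and the loop hands a fixed quantum to every ready task in turn. This is a conceptual sketch of the policy only, not how a real operating-system scheduler is implemented.

```c
#include <stdio.h>

#define QUANTUM 100   /* milliseconds of CPU granted per turn (the time slice) */

struct task {
    const char *name;
    int remaining_ms;   /* work left before the task finishes */
};

int main(void)
{
    struct task tasks[] = {
        { "editor",   250 },
        { "compiler", 400 },
        { "player",   150 },
    };
    int n = sizeof tasks / sizeof tasks[0];
    int finished = 0;

    /* Round-robin: give every READY task one quantum, then move on to the next. */
    while (finished < n) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining_ms <= 0)
                continue;                              /* already done, skip it */
            int slice = tasks[i].remaining_ms < QUANTUM
                          ? tasks[i].remaining_ms : QUANTUM;
            tasks[i].remaining_ms -= slice;            /* "run" for one slice   */
            printf("%-8s ran %3d ms, %3d ms left\n",
                   tasks[i].name, slice, tasks[i].remaining_ms);
            if (tasks[i].remaining_ms == 0)
                finished++;
        }
    }
    return 0;
}
```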

Friday, October 25, 2019

Relationship between Britain and the United States during the Eden and

SINCE THE END OF WORLD WAR II, a romanticised ‘special relationship’ between the United States and Britain has been referenced on countless occasions in speeches, books, and essays by academics and statesmen on both sides of the Atlantic. The relationship has multiple definitions, with no precise doctrine or formal agreement that outlines its tenets, and has been apparent in a myriad of interactions between the two countries. It is visibly apparent culturally, as the United States evolved from a nucleus of British settlers to become an English-speaking country, sharing with Great Britain ‘joint aims’ and a ‘common heritage’, as is often referenced in political rhetoric and by David Watt in his introduction to the book The Special Relationship (D. Watt 1). Yet this perceived relationship between the two countries has gone beyond a joint appreciation for the literature of William Shakespeare and the flavour of a Burger King Whopper to become manifest in political and military relations between the United States and Britain. Winston Churchill was the first to prominently recognise an Anglo-American ‘special relationship’, stating in the years immediately following World War II that he saw the relationship between the US and the UK as an ‘alliance of equals’, according to Sir Michael Howard in the afterword of The Special Relationship (Howard 387). Howard writes that Britain in general saw the ‘special relationship’ as a vehicle for the United States ‘to accept and underwrite Britain’s status as a coequal world power’ (387). As time passed, however, Britain’s standing as a Great Power quickly diminished. Despite this, British possession of nuclear weapons, United Nations Security Council membership, access to political an...

...Ernest R. and Gregory F. Treverton. ‘Defence Relationships: American Perspectives’. The Special Relationship. Ed. William Rogers Louis and Hedley Bull. Oxford: Clarendon Press, 1986. 161-184.

Perkins, Bradford. ‘Unequal Partners: The Truman Administration and Great Britain’. The Special Relationship. Ed. William Rogers Louis and Hedley Bull. Oxford: Clarendon Press, 1986. 43-64.

Rothwell, Victor. Anthony Eden. Manchester: Manchester U.P., 1992.

Walker, Martin. The Cold War. London: Fourth Estate Ltd., 1993.

Watt, D. Cameron. ‘Demythologising the Eisenhower Era’. The Special Relationship. Ed. William Rogers Louis and Hedley Bull. Oxford: Clarendon Press, 1986. 65-86.

Watt, David. ‘Introduction: The Anglo-American Relationship’. The Special Relationship. Ed. William Rogers Louis and Hedley Bull. Oxford: Clarendon Press, 1986. 1-16.

Thursday, October 24, 2019

Principle of Management Essay

From Scientific to Administrative

Back around 1860, Henri Fayol, a then-young engineer, began working at a coal mine in France. While working at the mines, he noticed that managing the miners was not an easy job: managing was not as effective as it could be, and managers had few resources and tools to manage people better. At the time, Frederick Winslow Taylor, founder of the school of scientific management, was making strides in maximizing productivity by focusing on the relationship between the work and the worker. In other words, Taylor believed that there was a science to work: if workers worked more like machines, there would be increased productivity.

[Image: Frederick Winslow Taylor founded the school of scientific management.]

Unlike Taylor’s scientific management theory, Fayol believed that management involved more than just work and workers. Managers needed specific roles in order to manage work and workers. This became known as the administrative school of management and was founded on the six functions, or roles, of management:

1. Forecasting
2. Planning
3. Organizing
4. Commanding
5. Coordinating
6. Controlling

Principles 1-7

These roles, used as a process, focused on the entire organization rather than just the work. Once broken down into smaller parts, the six functions evolved into Fayol’s 14 Principles of Management. In this lesson, we will focus on the first seven principles:

1. Division of Work
2. Authority
3. Discipline
4. Unity of Command
5. Unity of Direction
6. Subordination of Individual Interests to the General Interest
7. Remuneration

While Fayol’s 14 Principles of Management are not as widely used as they once were, it is important to understand how the foundation of administrative management theory was developed to address the needs of the times. This macro approach was the first of its kind. Let’s not forget, Taylor did not focus on the human element.

[Image: Henri Fayol’s principles of management focus on the human element.]

Taylor’s scientific approach to work focused on building a better, stronger, faster and more productive team through physical elements. Fayol didn’t see it that way. Fayol saw workers as humans possessing qualities that required a more general approach to getting the work done; he saw it as a whole organizational effort.

Principles Explained

Let’s take each principle and use examples to better understand how these principles work together to create an administrative management mindset. Let’s use Fayol and the Principles, a rock band, to help us better understand the first seven of the 14 Principles of Management.

1. Division of Work: When employees are specialized, output can increase because they become increasingly skilled and efficient. Fayol and the Principles is made up of four members, including Fayol. Each band member specializes in a specific instrument or talent: Fayol is the lead singer, while the other members play instruments. The band is able to produce quality music because each member performs the job in the band that he or she is most specialized in. If we were to mix it up a bit and put Fayol on bass guitar and another member on vocals, neither of whom possesses the skill to perform that job, the sound would be much different.

2. Authority: Managers must have the authority to give orders, but they must also keep in mind that with authority comes responsibility. Fayol and the Principles understand that they should specialize in their specific areas; however, there needs to be a leader. Fayol assumes the role of leader and gives everyone orders. He says ‘Play this.
Do that.’ But with that comes responsibility. He knows that, whatever task he delegates to the band, he must make sure that the task is completed, that the task is done in a productive way and that it yields results.

3. Discipline: Discipline must be upheld in organizations, but methods for doing so can vary. From time to time, the band members do not perform to Fayol’s standard. Even though Fayol looks at the organization as a whole organizational effort, he also knows that he must administer discipline for ineffectiveness. Two of Fayol’s band members decided to take a break from practice to play a competitive game of Pin the Tail on the Donkey. He must administer swift discipline in line with the offense. He also knows that there is no single discipline that can be levied against the band members; it must be done on a case-by-case basis. In this case, the two band members were docked pay for the time spent playing a game when they should have been practicing for their show.

4. Unity of Command: Employees should have only one direct supervisor, because confusion follows when multiple people give orders. In the case of the rock band, Fayol is in charge. This is expressed by the name of the band and implied by the orderly way in which work is delegated. Fayol is the only person to give direction.

5. Unity of Direction: Teams with the same objective should be working under the direction of one manager and using one plan. This will ensure that action is properly coordinated. Just like unity of command, it is important for Fayol to keep the band on a single track, course or direction. One manager. One plan. One vision.

6. Subordination of Individual Interests to the General Interest: The interests of one employee should not be allowed to become more important than those of the group, and this includes managers. Fayol knows how to maintain a balance between personal endeavors and those of the greater good. Fayol and the Principles are a rock band; this is their purpose, their identity. If one of the members feels differently, regardless of how strongly he feels, this self-interest, or individual interest, is not more important than the interests of the band and its members.

7. Remuneration: Employee satisfaction depends on fair remuneration for everyone, including financial and non-financial compensation. When it comes to payday, Fayol knows that he must pay the band and pay them fairly, in both money and perks. It is tempting to take all of the backstage perks, like free T-shirts and sodas, and keep them for himself, but by sharing the rewards, Fayol has a much more satisfied team.

Lesson Summary

In summary, Fayol’s 14 Principles of Management serve the organization as a whole. By dividing the work into specialized and specific jobs, workers are able to work more efficiently. Small management units that oversee functional areas of the organization are able to assign work and hold workers accountable for their production, which makes it easier to measure productivity. Once a system of accountability is in place and productivity can be monitored, it is easier to determine who is performing and who is not. Managers are able to discipline workers who fall short of goals selectively, individually, quickly and in the correct measure. Having just one manager assigned to a team takes away any task confusion: workers have only one supervisor directing them. With only one supervisor directing work, it is easy to motivate employees to buy into one plan. This minimizes self-interest.
With only one manager managing the work of one team, which shares one vision, compensating the team can be done fairly.

Wednesday, October 23, 2019

Philosophical concept Essay

Coyote Ugly (2000): Shy, aspiring songwriter Violet overcomes her stage fright, gets the man of her dreams and is offered a major recording contract after making drinks on top of a bar, clad basically in her skivvies. The movie Coyote Ugly inspires people with one person’s striving to fulfill her dream and to overcome all the hardship in life: the struggle of one person to overcome her fears, with the help of the people surrounding her, in order to survive the big city. The movie symbolizes the growth of a person physically, emotionally, and socially. Through this film the client will be immersed in a process of in-depth examination of the meaning and power of images. Images, in fact, are never neutral; their effect is that of conditioning the observer. It is vital, thus, to carry out an analysis of what an image actually is. Image-based thinking will be considered both in relation to the creative process and to problem solving. The counseling will start with the philosophical concept of an idea as a mental representation. The symbolism of dreams should be evaluated in order to understand the close relationship between images and the unconscious. The psychoanalytical proposition that a film may be considered the film director’s dream will be closely scrutinized. The concept of cinematherapy is rooted in the awareness that the film viewer is conditioned by his or her individual life experience, and this in turn makes the viewer’s perception highly unique. The viewer’s intimate interior world and life experiences condition perception of the film and result in a highly subjective interpretation. This is because the viewer assimilates only certain images and edits out others, all on the basis of unique individual experiences. The study of cinematherapy will enable the viewer to understand the underlying causes of certain emotions experienced during a film, and will shed light on certain aspects of one’s own personality and on how others view the same situation. “Mining the gold” in movies means uncovering our hidden finest attributes by understanding how we project these virtues onto film heroes and heroines. Identifying with a character can help us to develop inner strength as we recall forgotten inner resources and become aware of the right opportunity for those resources to be applied. Like dream work, cinematherapy allows us to gain awareness of our deeper layers of consciousness, helping us move toward new perspectives or behavior as well as toward healing and integration of the total self. As observing helps us to “step back”, the bigger picture becomes more obvious. In this way, watching movies helps us learn to understand ourselves and others more deeply in the “big movie” of our life. In identifying the presenting problems and goals for therapy, the appropriate questions to ask yourself are: Why would this client benefit from a cinematherapy intervention? What would this client get out of it? Can the intervention be tied back to the treatment plan? In assessing clients’ strengths, such as interests, hobbies, activities, and type of employment, the questions are: What type of film would benefit this client: standard movie, documentary, or instructional? What type of genre would they prefer: comedy, drama, or science fiction?
In determining the clients’ ability to understand the content of the film and to recognize similarities and differences between themselves and the characters, the questions are: Will the client understand how to use the film as a metaphor for their own life? Do they have the mental capability to participate in processing the content? Do they recognize the difference between fantasy and reality? In taking into account issues of diversity when choosing a film, the suitable question is: Will the film be offensive, or distract from the real purpose of the assignment? The therapist should always watch the movie before assigning it, so that you can process the movie with the client or know the significant parts to discuss later; it also saves having to apologize for a scene that offended the client or his or her parents. Preparation is essential to cinematherapy. The therapist should provide clients with a rationale for assigning a film.