Tuesday, January 28, 2020

Allocation of Resources in Cloud Server Using Lopsidedness

B. Selvi, C. Vinola, Dr. R. Ravi

Abstract– Cloud computing plays a vital role in an organization's resource management. A cloud server allows dynamic resource usage based on customer needs and achieves efficient allocation of resources through virtualization technology. This paper addresses a system that uses virtualization to allocate resources dynamically based on demand and saves energy by optimizing the number of servers in use. It introduces a concept to measure the inequality in the multi-dimensional resource utilization of a server. The aim is to build an efficient resource utilization system that avoids overload and saves energy in the cloud by allocating resources to multiple clients through virtual machine mapping on physical machines; idle PMs can be turned off to save energy.

Index Terms– cloud computing, resource allocation, virtual machine, green computing.

I. Introduction

Cloud computing provides services in an efficient manner, dynamically allocating resources to multiple cloud clients at the same time over the network. Nowadays many business organizations use cloud computing because of its advantages in resource management and security management. A cloud computing network is a composite system with a large number of shared resources. These are subject to unpredictable demands and can be affected by external events beyond the system's control. Cloud resource allocation management requires complex policies and decisions for multi-objective optimization. It is extremely difficult because of the complexity of the system, which makes it impracticable to have accurate global state information, and because the system is subject to continual and unpredictable interactions with its surroundings. The strategies for cloud resource allocation management associated with the three cloud delivery models, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), differ from one another. In all cases, cloud providers face huge, sporadic loads that challenge the claim of cloud elasticity.

Virtualization is the single most effective way to decrease IT expenses while boosting effectiveness and agility, not only for large enterprises but also for small and mid-budget organizations. Virtualization technology has the following advantages: it can run multiple operating systems and applications on a single computer; consolidate hardware to get much higher productivity from a smaller number of servers; save 50 percent or more on overall IT costs; and speed up and simplify IT management, maintenance, and the deployment of new applications.

The system aims to achieve two goals. First, the capacity of a physical machine (PM) should be sufficient to satisfy the resource requirements of all virtual machines (VMs) running on it; otherwise, the PM is overloaded and the performance of its VMs degrades. Second, the number of PMs used should be minimized as long as they can still satisfy the demands of all VMs; idle physical machines can be turned off to save energy. There is an inherent trade-off between the two goals in the face of changing resource needs of VMs. For overload avoidance, the system should keep the utilization of PMs low to reduce the possibility of overload in case the resource needs of VMs increase later.
For green computing, the system should keep the utilization of PMs reasonably high to make efficient use of their energy. This paper presents the design and implementation of an efficient resource allocation system that balances the two goals. The contributions are as follows: the development of an efficient resource allocation system that avoids overload effectively while minimizing the number of servers used; the introduction of the concept of "lopsidedness" to measure the uneven utilization of a server, so that by minimizing lopsidedness the system improves the overall utilization of servers in the face of multidimensional resource constraints; and a load prediction algorithm that can capture the future resource usage of applications accurately without looking inside the VMs.

Fig. 1 System Architecture

II. System Overview

The architecture of the system is presented in Fig. 1. Each physical machine runs the VMware hypervisor (VMM), which supports VM0 and one or more guest VMs. Each VM can host one or more applications. All physical machines share the same storage. The mapping of VMs to PMs is maintained by the VMM. An information collector node (ICN) running on VM0 collects information about the resource status of the VMs. The virtual machine monitor creates and monitors the virtual machines and manages CPU scheduling and network usage monitoring. It is assumed that an available sampling technique can measure the working set size of each virtual machine. The information collected at each physical machine is passed to the admin controller (AC). The AC connects to the VM allocator, which is activated periodically and receives from the ICN the resource demand history and the current status of the VMs.

The allocator has several components. The predictor estimates the future demands of each virtual machine and the total load of each physical machine. The ICN at each node first attempts to satisfy the new demands locally by adjusting the resource allocation of VMs sharing the same VMM. The hot spot remover in the VM allocator detects whether the resource utilization of any PM is above the hot point; if so, some VMs running on that PM are migrated to another PM to reduce its load. The cold spot remover identifies PMs whose utilization is below the average utilization (cold point) of actively used PMs; if so, some PMs can be turned off to save energy. Finally, the exodus (migration) list is passed to the admin controller.
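To make this control flow concrete, the sketch below wires the pieces together: per-resource usage histories are fed to a simple load predictor, and the allocator then flags PMs above the hot point and, when the average utilization of active PMs is low enough, PMs below the cold point. This is a minimal illustration only; the exponentially weighted predictor, the threshold values, and every name in the code are assumptions of this sketch, not details taken from the paper.

```python
# Minimal sketch of one periodic run of the VM allocator (illustrative only;
# the predictor, thresholds, and all names below are assumed, not from the paper).
from typing import Dict, List

ALPHA = 0.7              # smoothing weight of the assumed load predictor
HOT_POINT = 0.90         # any resource above this marks the PM as a hot spot
COLD_POINT = 0.25        # all resources below this marks the PM as a cold spot
GREEN_POINT = 0.50       # look for cold spots only when average utilization is this low


def predict(history: List[float]) -> float:
    """Estimate the next utilization value from an observed history (simple EWMA)."""
    estimate = history[0]
    for observed in history[1:]:
        estimate = ALPHA * observed + (1 - ALPHA) * estimate
    return estimate


def allocation_cycle(usage: Dict[str, Dict[str, List[float]]]) -> Dict[str, List[str]]:
    """usage maps PM name -> resource name -> utilization history (fractions of capacity)."""
    predicted = {pm: {res: predict(hist) for res, hist in resources.items()}
                 for pm, resources in usage.items()}

    # Hot spot detection: any single resource above the hot point.
    hot = [pm for pm, res in predicted.items()
           if any(u > HOT_POINT for u in res.values())]

    # Green computing: consolidate only when overall utilization is low.
    all_utils = [u for res in predicted.values() for u in res.values()]
    avg_util = sum(all_utils) / len(all_utils)
    cold = []
    if avg_util < GREEN_POINT:
        cold = [pm for pm, res in predicted.items()
                if all(u < COLD_POINT for u in res.values())]

    # The hot and cold lists drive the migration ("exodus") list that the
    # allocator hands to the admin controller.
    return {"hot_spots": hot, "cold_spots": cold}


print(allocation_cycle({
    "pm1": {"cpu": [0.80, 0.92, 0.95], "mem": [0.60, 0.65, 0.70]},
    "pm2": {"cpu": [0.10, 0.12, 0.10], "mem": [0.20, 0.15, 0.18]},
}))
```

In this example pm1 is flagged as a hot spot and pm2 as a cold spot, which is the kind of input the hot spot remover and cold spot remover then act on.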
III. The Lopsidedness Algorithm

The resource allocation system introduces the concept of lopsidedness to measure the unevenness in the utilization of multiple resources on a server. Let n be the number of resources and let r_i be the utilization of the i-th resource. The resource lopsidedness of a server p is defined as

    lopsidedness(p) = sqrt( sum_{i=1..n} (r_i / r_avg - 1)^2 )

where r_avg is the average utilization of all resources in server p. In practice, not all types of resources are performance critical, so only bottleneck resources are considered in the above calculation. By minimizing lopsidedness, the system can combine different types of workloads nicely and improve the overall utilization of server resources.

A. Hot and Cold Points

The system executes periodically to evaluate the resource allocation status based on the predicted future resource demands of the VMs. A server is defined as a hot spot if the utilization of any of its resources is above a hot threshold. This indicates that the server is overloaded and hence some VMs running on it should be migrated away. The temperature of a hot spot p is defined as the square sum of its resource utilization beyond the hot threshold,

    temperature(p) = sum_{r in R} (r - r_t)^2

where R is the set of overloaded resources in server p and r_t is the hot threshold for resource r (only overloaded resources are included in the calculation). The temperature of a hot spot reflects its degree of overload; if a server is not a hot spot, its temperature is zero.

A server is defined as a cold spot if the utilizations of all its resources are below a cold threshold. This indicates that the server is mostly idle and is a potential candidate to turn off to save energy. However, the system does so only when the average resource utilization of all actively used servers (APMs) in the system is below a green computing threshold. A server is actively used if it has at least one VM running; otherwise, it is inactive. Finally, the warm threshold is defined as a level of resource utilization that is sufficiently high to justify having the server running, but not so high as to risk becoming a hot spot in the face of temporary fluctuations in application resource demands.

Different types of resources can have different thresholds. For example, the hot thresholds for CPU and memory resources can be set to 90 and 80 percent, respectively; a server is then a hot spot if either its CPU usage is above 90 percent or its memory usage is above 80 percent.

B. Hot Spot Reduction

The system sorts the list of hot spots in descending order of temperature (i.e., the hottest one is handled first). The goal is to eliminate all hot spots if possible; otherwise, their temperatures are kept as low as possible. For each hot server p, the system first decides which of its VMs should be migrated away. It sorts the VMs by the resulting temperature of the server if that VM were migrated away, aiming to migrate the VM that reduces the server's temperature the most. In case of ties, it selects the VM whose removal reduces the lopsidedness of the server the most. For each VM in the list, the system then checks whether it can find a destination server to accommodate it. The destination must not become a hot spot after accepting the VM. Among all such servers, the system selects the one whose lopsidedness can be reduced the most by accepting this VM; note that this reduction can be negative, in which case the system selects the server whose lopsidedness increases the least. If a destination server is found, the system records the migration of the VM to that server and updates the predicted load of the related servers. Otherwise, it moves on to the next VM in the list and tries to find a destination server for it. As long as a destination server can be found for any of its VMs, this run of the algorithm is considered a success, and the system moves on to the next hot spot.

Note that each run of the algorithm migrates at most one VM away from an overloaded server. This does not necessarily eliminate the hot spot, but it at least reduces its temperature. If the server remains a hot spot in the next decision run, the algorithm repeats the process. The algorithm could be designed to migrate multiple VMs per run, but that would add more load on the related servers during a period when they are already overloaded. The system therefore uses the more conservative approach and leaves itself some time to react before initiating additional migrations.
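The two metrics translate almost line for line into code. The sketch below follows the formulas given above; the function names, the dictionary-based inputs, and the guard for an all-idle server are assumptions made for this example.

```python
# Sketch of the lopsidedness and temperature metrics defined above
# (names and input layout are assumptions made for this example).
from math import sqrt
from typing import Dict


def lopsidedness(utilization: Dict[str, float]) -> float:
    """Unevenness of multi-resource utilization on one server.

    utilization maps resource name -> utilization fraction for the
    bottleneck resources only, e.g. {"cpu": 0.95, "mem": 0.60}.
    """
    mean = sum(utilization.values()) / len(utilization)
    if mean == 0:
        return 0.0  # an idle server is treated as perfectly even
    return sqrt(sum((u / mean - 1.0) ** 2 for u in utilization.values()))


def temperature(utilization: Dict[str, float],
                hot_threshold: Dict[str, float]) -> float:
    """Square sum of utilization beyond the hot threshold (overloaded resources only)."""
    return sum((u - hot_threshold[r]) ** 2
               for r, u in utilization.items()
               if u > hot_threshold[r])


# Example with the thresholds quoted in the text: CPU 90%, memory 80%.
server = {"cpu": 0.95, "mem": 0.60}
thresholds = {"cpu": 0.90, "mem": 0.80}
print(lopsidedness(server))             # about 0.32: utilization is uneven
print(temperature(server, thresholds))  # 0.0025: the server is a (mild) hot spot
```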
IV. System Analysis

In the cloud environment, a user issues a request to download a file. The request is stored and processed by the server in order to respond to the user, and the server checks for an appropriate sub-server to assign the task. A job scheduler is an application for controlling unattended background program execution; in this module, the job scheduler is created and connected to all servers to perform the tasks requested by users. In user request analysis, the requests are analyzed by the scheduler before the task is given to the servers. This module helps to avoid task overloading by analyzing the nature of the user's request: it first checks the type of file to be downloaded, which can be a text, image, or video file. In server load value, a load value is identified for job allocation. To reduce overload, different load values are assigned according to the type of file being processed: a text file is assigned the minimum load value, an image file a medium load value, and a video file a high load value. In server allocation, the server allocation task takes place. To manage mixed workloads, a job-scheduling algorithm is followed in which load values are assigned dynamically depending on the nature of the request: the server with the minimum load value takes a high-load job the next time, and the server with a high load value takes a minimum-load job the next time. The aim is to build an efficient resource utilization system that avoids overload and saves energy in the cloud by allocating resources to multiple clients through virtual machine mapping on physical machines, while idle PMs can be turned off to save energy.
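The load-value scheme and the alternating assignment rule can be sketched as follows. The concrete load values (1, 2, and 3 for text, image, and video) and the class layout are assumptions for illustration; the text only states that the three request types carry minimum, medium, and high load values.

```python
# Sketch of the file-type load values and the balancing rule described above
# (the numeric values 1/2/3 and all names are assumptions for illustration).
from typing import List

LOAD_VALUE = {"text": 1, "image": 2, "video": 3}   # minimum / medium / high


class SubServer:
    def __init__(self, name: str):
        self.name = name
        self.load = 0                 # accumulated load value of assigned jobs

    def assign(self, request_type: str) -> None:
        self.load += LOAD_VALUE[request_type]


def schedule(request_type: str, servers: List[SubServer]) -> SubServer:
    """Heaviest jobs go to the least-loaded server; lighter jobs to the busiest."""
    if LOAD_VALUE[request_type] == max(LOAD_VALUE.values()):
        target = min(servers, key=lambda s: s.load)   # high-load job -> lightest server
    else:
        target = max(servers, key=lambda s: s.load)   # light/medium job -> busiest server
    target.assign(request_type)
    return target


servers = [SubServer("s1"), SubServer("s2")]
for req in ["video", "text", "video", "image"]:
    chosen = schedule(req, servers)
    print(req, "->", chosen.name, "total load", chosen.load)
```

Over a stream of requests this keeps heavy and light jobs alternating between servers, which is the balancing behaviour the module aims for.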
Fig. 2 Comparison graph

V. Conclusion

This paper presented the design, implementation, and evaluation of an efficient resource allocation system for cloud computing services. The allocation system multiplexes virtual to physical resources based on user demand. The challenge is to reduce the number of active servers during periods of low load without sacrificing performance. The system achieves overload avoidance and saves energy under multi-resource constraints by satisfying new demands locally, adjusting the resource allocation of VMs sharing the same VMM, and turning off unused PMs where possible. Future work can improve the prediction algorithm to increase the stability of resource allocation decisions and explore AI or control-theoretic approaches to find near-optimal threshold values automatically.

Sunday, January 19, 2020

Bob Dylan Essay

Imagine: every day thousands of people get killed in a war no one asked for. Friends and family are sent to a horrible place with little chance you'll ever see them again. This war, a useless and disgusting war, started without any real reason and only goes on because the leaders of your country are too proud to end it. For millions of American citizens this nightmare became reality. In 1964 the American president Johnson started sending soldiers to Vietnam. At the end of the war in 1972, it is estimated that, in total, over 2.5 million people on both sides had been killed. As the war continued, the American people grew more and more dissatisfied and angry at their government. They wanted the war to stop; it had been going on long enough and too many people had been killed. President Johnson, ho...

Saturday, January 11, 2020

Profit Margin and High End Segment

Cost Leadership

After contemplating many different strategy options and evaluating our markets, the Ferris group decided to follow a strategy discussed in chapter 6 of Wheelen and Hunger's text[1]: cost leadership. This strategy focuses on "a lower-cost competitive strategy that aims at the broad mass market and requires efficient scale facilities, cost reductions, and cost and overhead control." It avoids marginal customers and aims for cost minimization in R&D, service, sales force, and advertising. Used effectively, this strategy should reduce and control labor and overhead costs, which in turn decreases variable expenses while simultaneously increasing contribution margins and, ultimately, net profits. To follow this strategy, we decided to take the following actions:

1. We refrained from introducing any new products in order to avoid paying large start-up costs without sufficient funding. It would have been wise to introduce a new product if we had had more rounds in the simulation; this would have allowed us to specialize in the markets we were efficient in and drop those that were costing us money. To see any benefit from a new product during the simulation, however, it would have had to be launched within the first few rounds, and spending a lot of borrowed money early on did not make sense for our cost leadership strategy. We would have had to wait until we could fund it with our retained earnings in order to stay in alignment with our strategy, which would not have been an option until the 3rd or 4th year, by which time it was much too late to see positive benefits by year 6.

2. We remained quite frugal with our allocated marketing expenses (promotion and sales budgets) to keep our costs low and contribution margins high.

3. We decided to increase our automation for products that did not have rapidly changing market buying criteria (i.e., if expectations regarding size and performance stayed fairly similar throughout the six rounds because their drift rates were small, then we increased automation for that particular line within the first year).

4. We attempted to use a Just In Time (JIT) strategy, meaning that we tried to calculate the exact quantity each market would purchase of our products and then produced only enough to have no more and no less on hand at the end of each forecasted year.
• To calculate this precise forecast, in each segment we took the actual sales from the previous year and multiplied it by the market growth rate for the corresponding market segment. We then multiplied that number by a conservative (i.e., 90%) and optimistic (i.e., 10%) rate to get the respective marketing and production forecasts.
• The only time we produced a little higher than the conservative forecast calculated using the above formula was if we had stocked out of an item in the previous year and could then expect even higher sales the following year, essentially preventing ourselves from short-changing our forecast for the next year. If this was the case for a previous year, we would be a little more aggressive with our forecast for the following year and used conservative and optimistic rates of around 90% and 120% respectively.
5. We decided to decrease the Mean Time Before Failure (MTBF) of those products (the Traditional and Low End segments) for which MTBF was not a very important buying criterion, setting it to the minimum of the range acceptable to the customer (i.e., if the desired MTBF range was 22,000 to 27,000 for a product whose purchasing decision did not depend much on MTBF, we set the MTBF for that product at the minimum of 22,000). This kept costs low by decreasing the reliability, and hence the production cost, of products whose customers did not care about MTBF.

Overall Company Performance Mistakes

During the simulation, we made quite a few costly mistakes that put us in a really bad spot compared to the other teams. These mistakes are as follows:

1. We missed the opportunity to launch a new product because, right out of the gate, we were focused on the products we already had and on making them all profitable. We were not willing to create a new product until we could finance the investment with our retained earnings instead of taking on debt. The problem was that it took us about 4 rounds to build up a cushion of cash that made us comfortable with such an investment. Unfortunately, since it takes 2 rounds to launch a new product, the timing no longer felt right after round 4: we would not have generated profits from the new product by the end of the simulation, and we could not justify the investment in a long-term project with only 2 years left. In short, we did not move quickly enough within the first few rounds to assess our markets as a whole and make long-term investment decisions.

2. My group was also quite concerned with not increasing debt, instead building our retained earnings and collecting cash as a cushion. This tactic was not a great one because it cost us points for wealth creation; we should have been using that saved cash to invest in our company rather than hanging on to the money.

3. We never created any long-term plans during the simulation. This probably hurt us the most, because all we focused on was the previous year's results and how to improve them. We never actually set specific goals, which would have forced us to create a detailed plan of action to achieve them; instead we were blindly just trying to be or stay profitable.

4. We continually implemented the same strategies that were not producing stellar results, especially with regard to individual segments. We kept executing the same tactics (i.e., low cost, JIT, etc.) without changing any details (i.e., more product development, repositioning, etc.) and kept hoping that things would get better. Our performance did get a little better in our underperforming segments after about 3 rounds, but not enough to push us ahead of our competition as a whole company.

5. We did not invest in automation for a few lines (Performance and Size) like we should have at the beginning. For whatever reason, a few team members believed that increasing the automation for a line whose product specifications change rapidly from year to year (the High End, Performance, and Size segments) was a bad idea. They were convinced that increasing the automation for these segments would be useless and that it would in fact return to where it originally started at each year end.
Looking back, we should have dramatically increased the automation for these segments to keep our variable costs low and in alignment with our strategy.

6. One of our biggest problems was that we kept making mistakes that cost us immensely. Some of those mistakes include:
• Wrong Growth Rate. We used an Excel spreadsheet to determine the forecasts for each segment throughout the entire simulation. However, we did not realize until we were making decisions for round 4 that the formulas had actually been entered wrong in the spreadsheet: every segment was being forecasted at the Traditional segment's growth rate rather than the actual growth rate that corresponded to each segment.
• Inversion of Specifications. We accidentally inverted the size and performance specifications for the High End segment during round 3. This dramatically reduced our net profit margin for this particular segment (Please see Exhibit 1). Sadly, this was originally one of our best markets, and because of this mistake we missed a huge opportunity to increase our profits and perform well as a company.
• Long Revision Dates. We did not notice until round 4 was processed that the revision date for the High End segment was not until 2 years later. Therefore, we were unable to keep the product for this segment competitive for the remainder of the simulation, especially after our setback in round 3. In fact, this mistake dramatically decreased our contribution margin for this segment and even drove its net profit margin deeply negative (Please see Exhibits 2 and 1 respectively). Again, we dramatically messed up one of our best-selling products and were continually trying to play catch-up from our mistakes with this line; therefore, we missed a huge opportunity to increase our profits.

Performance Measures

To determine whether or not our company was doing well, we assessed a few areas of the Capstone Courier:

1. Contribution Margin Percentage (Please see Exhibit 3). We looked at this percentage after each round was processed to determine whether or not it was increasing. If it was not increasing, we knew that our strategy of lowering our costs had not been effective for the round in question, alerting us to lower our costs.

2. Contribution Margins (Please see Exhibit 2). We looked at the contribution margins for each segment to concentrate on each individually. Checking whether each segment was increasing or decreasing was effective because it showed us which products were costing us the most in variable costs (i.e., materials, labor, etc.), and therefore which segments we needed to cut costs for.

3. Net Profit (Please see Exhibit 4). This was our first indicator on the Courier as to whether or not we did well in the previous round. We started off doing pretty badly, but by round 3 we had brought our net profit up by about $5,200 from round 1. However, the mistakes mentioned above led to a dramatic decrease in round 4 that put us in an even worse spot than after round 1. Luckily, we made strides to overcome those obstacles (discussed below in the Product Line Performance section), which increased our profits the following year by almost $9,500.

4. Net Profit Margins (Please see Exhibit 1). This measure was quite useful in determining how our net profits could be assessed for each segment.
This told us the story of which products were profitable, which were most profitable, and which were actually costing us money to sell. Our goal for each round was to have every segment positive and turning a profit, which we finally accomplished in rounds 5 and 6.

Product Line Performance Errors

We had many issues and made many errors with my particular line (High End – Fist), as mentioned above. During round 3, we inverted the performance and size specifications. In addition, during round 4 we did not realize that our revision date was 2 years away; this meant that my product could not be competitive within its segment for 3 rounds, and the remaining year was spent catching up to the competition. Once the mistakes were made, there was nothing we could do to correct them. However, we did try to redirect our focus from staying 100% competitive within the High End segment with Fist to using this product to be more competitive within the Traditional segment during round 4 while our revision date neared. To do this, we dropped the selling price from $39.00/unit to $28.00/unit, for a couple of reasons:

1. Fist lay closest to the Traditional product on the perceptual map. Therefore, we figured we would make the most of our mistake, which could not be undone, by trying to stay competitive on the edge of both the High End and Traditional markets.

2. Luckily, the lowest price within the range for the High End segment was $28.00/unit, and the highest price within the range for the Traditional segment was $28.00/unit as well. For this reason, we decided to sell Fist during the segment's crisis at a price that was acceptable to both markets, in hopes of picking up customers from each market, since we were well aware that we would not be very competitive during round 4 within the High End segment.

Statistics/Performance

The table below shows that we were steadily climbing in our progress for Fist during the first 2 rounds, that our mistakes made this segment unprofitable during both rounds 3 and 4, and that every statistic declined in those rounds (our customer satisfaction dropped because the product was not competitive in the High End market, our contribution margin percentage dramatically decreased due to fewer sales and less revenue, and our market share almost completely disappeared). During rounds 5 and 6, we slowly climbed our way back to a profitable position for this segment; once we were again able to reposition Fist within the High End market, we started to improve.

High End Segment (Fist) Statistics

Round                  1         2         3          4          5         6
Revenue                $21,615   $27,099   $17,301    $22,253    $23,470   $32,026
Market Share           19%       20%       11%        6%         12%       17%
Contribution Margin    $7,823    $9,624    $4,735     $4,105     $6,698    $9,929
Contribution %         36%       35%       27%        10%        28%       31%
Net Margin             $2,628    $3,689    ($1,403)   ($1,028)   $1,814    $4,449
Customer Score         24        29        10         11         18        15

Functional Area Strategies and Performance

Due to my expertise with regard to my educational focus and previous work experience, my functional area was marketing (alongside Ashley Barnes). Unfortunately, we were not well informed about how to maximize our marketing efforts and investments (promotion and sales expenses) for the simulation until round 4.

Promotion and Sales

We initially remained quite frugal with our promotion and sales budgets to keep our costs low and contribution margins high, in order to follow our cost leadership strategy described previously. However, by investing larger amounts into sales and promotion within the first two rounds, we would have better followed our strategy.
This would have been the case because we would have paid less in expenses in the later rounds, since after the initial higher investments we would only have had to invest enough to maintain our accessibility and awareness percentages, essentially reaping more benefits in the later rounds from our early investments. After we learned the formulas for producing good customer survey results, however, we did quite well in certain segments. For example, we blindly allocated money to our Size segment during the first 3 rounds and slowly climbed our customer survey score. Once we learned how to use the formulas given in the Capstone Debrief Rubric, we were able to go from a customer survey score of 16 in round 3 to 50 in round 4, and even higher to 57 in round 5. The formula we used came from the Capstone Debrief Rubric and stated that in order to get:
• 3 Points – The promotional budget had to lie between $1.4M and $2M. The sales budget had to lie between $2.2M and $3M.
• 2 Points – The promotional budget had to lie between $1M and $1.4M or between $2M and $2.5M. The sales budget had to lie between $1.5M and $2.2M.
• 1 Point – The promotional budget had to lie between $0.7M and $1M or between $2.5M and $3M. The sales budget had to lie between $0.7M and $1.5M.
• 0 Points – The promotional budget had to be lower than $0.7M or higher than $3M. The sales budget had to be lower than $0.7M or higher than $3M.

Once we started to use these formulas, we were able to allocate the appropriate amount of funding to each segment. For example, if a certain segment was projected to lose money by allocating $1.4M to the promotional budget to get the full 3 points, we would cut the budget to about $1M and still be able to get 2 points without jeopardizing our contribution margin. This is reflected in the Capstone Debrief Rubric: we were awarded 3 points for our higher performing segments (Traditional, Low, and High) for rounds 4, 5, and 6 but were only granted 2 points for our lower performing segments (Performance and Size). In addition, we always strived to keep our size and performance specifications at exactly the current buying criteria plus the drift rates outlined on page 2 of the Industry Conditions Report. This kept each product at what the customer expected, so that customers received what they were asking for.

Customer Buying Criteria

We made it a priority to keep our prices as high as we could in each segment without disappointing our customers; this was our way of aligning our marketing strategies with our overall company strategy of cost leadership. We noted which criteria were most important to the customer to determine whether we could increase our prices for each product. For example, price was the least important buying criterion within the Size segment, meaning that these customers were not as sensitive to price changes or increases. Therefore, we were able to charge closer to the high end of the price range for the Size segment product (Fume), because this increase would not really affect the market buying decisions for the Size segment, much unlike the Low End segment

Friday, January 3, 2020

Malala Yousafzai's Autobiography I Am Malala and Nigel...

Malala Yousafzai's memoir I Am Malala and Nigel Cole's film Made in Dagenham present strong female protagonists who speak out against the injustice of patriarchal and cultural oppression. By exploring and documenting the struggle of these extraordinary individuals who find the courage to take a committed stand against the inequity they encounter, both texts powerfully illustrate that speaking out is essential to create a better world. Furthermore, they suggest that different political and social contexts can affect the likelihood of individuals suffering harm or loss when speaking out against adversity. However, those who do speak out face many physical and emotional risks before they see any reward. Moreover, while Made in Dagenham ... In contrast, Malala's attempt to create social change was far more dangerous. Malala and many other girls in Pakistan are denied the right to education when the Taliban seize power in the Swat Valley, Pakistan. Malala's struggle takes place in contemporary Pakistan, where speaking out is considered very dangerous. The memoir reveals the destruction of Pakistan's founder Ali Jinnah's original vision of a 'land of tolerance' by increasing Islamisation, two military dictatorships, corrupt politicians, poverty, illiteracy, and the rise of the 'forces of militancy and extremism' exemplified by the Taliban, led by Maulana Fazlullah, and the imposition of terror and fear under the guise of sharia law. The repression of individual freedom made people fearful to speak out. The Taliban had banned women from going 'outside without a male relative to accompany (them)' and told people to 'stop listening to music, watching movies and dancing'. The Taliban had 'blown up 400 schools', and public whippings demonstrated the consequences of disobedience, as did the execution of 'infidels' like the young dancer Shabana, whose body was dumped in the public square. Both texts, though Malala's more so than Rita's, reveal that speaking out in a volatile and dangerous political environment does involve more risks, but is essential for change to occur. Both texts clearly demonstrate that speaking