CHAPTER 8

Program Planning and Evaluation: Practical Considerations and Implications
The practice knowledge and skills for program planning and evaluation are not static; they are dynamic, evolving, and always changing. This chapter will highlight some of this practice knowledge and know-how. Specifically, it will discuss the implications of planning and evaluation for funding institutions and human service providers.

This chapter also aims to serve as a springboard for readers to start taking stock of how they can be effective and reflective practitioners. It is this ongoing self-assessment and improvement that helps to strengthen our professional competency and service quality.
Practical Considerations and Implications for Funding Institutions
An administrator of a state human services department reflected that, when developing the Request for Proposals, she knew it was important to explain the mission and goals of the funding opportunity and to include a detailed description of the program application requirements. She also required program applicants to budget for and explain how the proposed programs would evaluate their services. However, no details were provided in the Request for Proposals regarding the expectations and guidelines for conducting the evaluation. After reviewing many grant proposals, several programs were identified and funded. Several months into the implementation of program services, the state director was still wondering, “What should I expect from programs on evaluation? How should I monitor the programs’ evaluation efforts?”
When county, state, or federal departments release funds for competitive bid, or when private sources such as foundations provide funding for projects, program officers or program analysts are assigned to work with grantees. Program administrators and program officers of funding institutions come from a variety of backgrounds, including human services, and their knowledge and experience in program evaluation vary. They are, however, the liaisons and representatives of the funding institutions or organizations in monitoring and facilitating the service delivery efforts of the programs that were granted the awards. The responsibilities of these monitors or program officers include verifying that the programs are doing what they said they were going to do (i.e., process evaluation) and assessing proper budget management. Increasingly, there is a stronger push for programs to demonstrate the differences their services have made (i.e., outcome evaluation). Program officers are not only the inspectors who audit programs but also the advocates who help programs succeed. They are also the ones who want monthly statistics, quarterly summaries, annual reports, and many other types of documentation. These efforts are time consuming, tedious, and burdensome. They are, however, one of the ways to track performance and to tell the success and the human stories behind those program activities.
Program Documentation
Program staff usually plan their services while writing a proposal or reapplying for continued funding. They carefully list the details, including who is going to do what, when the program will begin and end, where it will occur, and who will receive the services. Hopefully, they also make plans for how they will evaluate the program. When program officers or program analysts monitor grantees, they ask program staff to submit documentation that can help them determine to what extent the programs did what they said they would do. For example, program officers may look at client records that document how often participants came to the service and what occurred when they participated. They may review rosters or attendance sheets, request monthly or quarterly progress reports, or request other documentation indicating that the services were conducted and the clientele was properly served. Therefore, program staff who know what they need to document or collect at the beginning of the program year, and who do so while program services are being conducted, will have the documentation needed when program officers or analysts request it.
As simple as this may sound, many programs do not systematically collect this information while services are being delivered. They wait until the end of the program year, find out what data they need, and then backtrack and try to reconstruct the information. Having a process evaluation plan at the beginning of services allows programs to collect and document accurate information as it happens or on a regular basis (e.g., monthly). Writing quarterly and annual reports then becomes simply a matter of assembling the well-organized data and presenting a more vivid report of the program’s successes and concerns. We all can attest to the fact that balancing your checkbook monthly and organizing your financial information neatly will make financial management and tax return time less stressful. We can also testify to the reality that that level of organization could be wishful thinking! The urgency of other important tasks, changing priorities, and possibly, just possibly, procrastination have all been the culprits. Designated responsibility, discipline, and sufficient resources for the evaluation tasks are therefore the important “protective factors” for the attainment of these record-keeping and evaluation tasks.
Expectations and Resources for Evaluation
Funding institutions continue to expect programs to evaluate their services. Most Requests for Proposals require applicants to describe how the proposed services will be evaluated. On the other hand, many funding institutions do not provide enough funding for the evaluation tasks. Funding institutions should provide clearer guidance in the Request for Proposals on how they would like programs to evaluate their services. Do they expect a program to conduct a process evaluation, or both process and outcome evaluations? Should the programs hire outside evaluators, or can the programs conduct their own evaluation using a variety of evaluation approaches? Are the funds that can be allocated for evaluation sufficient to hire an outside evaluator? If not, how will the limited funds that can be allocated for evaluation be used? These are some of the questions funding institutions are encouraged to consider when requesting that program applicants describe how they plan to evaluate their proposed services.
Both the funding institutions and the grantees want to evaluate and improve the services they support. The constraint, however, is that the funding resources are limited. This situation
is also true for some service agencies in the implementation of evaluation. Not every agency has the resources to hire an internal or external evaluator, or has staff with sufficient expertise in evaluation. If this is the case, funding institutions may consider providing the necessary resources. These may include technical assistance or training so that program staff can develop the knowledge and skills needed to conduct their own internal evaluation. This is highly recommended, especially if funding institutions expect programs to conduct an outcome evaluation or impact evaluation.
Box 8.1 describes how an independently funded federal institution provides evaluation assistance to a program that is expected to conduct an outcome evaluation of its services.
Program Improvement and Empowerment Evaluation
How should program officers use evaluation results to monitor programs? Many program staff have seen evaluation as a threat to their existence. Of those who have encountered evaluation, many see it as a method for determining whether or not they will receive continued funding: if their evaluation shows negative results, the chances of losing their funding are greater than if it shows positive results. While this is the reality of how funding institutions use evaluation results, evaluation can also be used to support program efforts. Program officers who monitor grant recipients could use evaluation results as tools for program improvement. They can measure the integrity of programs by determining whether programs conduct a quality internal evaluation, rather than judging whether programs should continue based on positive or negative evaluation results. Programs that have process and outcome evaluation plans demonstrate that they have an assessment system in place to examine and monitor service quality. Conducting these assessments provides programs with the information to determine whether, in fact, the services were delivered and, if so, whether they were effective. Programs that inform their officers that their services cannot be evaluated often raise doubts about whether the services were provided at all, and therefore whether the desired results were achieved.
Program officers and analysts can also use a program’s evaluation results as a monitoring tool, regardless of the results obtained (i.e., positive or negative), and they can use those results for program improvement. It is important for programs to be willing to use evaluation results to improve their services, whether the results are positive or negative. For example, if a program obtains positive outcome results, it may want to refine the quality of its services or expand services to other populations in need. If the program obtains less than positive results, the program officer could work with program staff on identifying problems, making program corrections, and monitoring progress. What program staff do differently in light of negative program outcomes can be the strength of the program. This is particularly true when working with new agencies, indigenous organizations, or other “developing” organizations. After all, some funding provisions are designed to support community agencies for worthy projects and to serve as a springboard for further community development.
Certainly, termination of funding is an option if a program continues to ignore its problems and fails to make the proper corrections. The contracts that were based on the grant proposals, along with the evaluation data (or the lack thereof), would become one of the bases for such a decision.
Practical Considerations and Implications for Human Service Providers
A businessman rushed out of the airport and jumped into a taxi. He instructed the driver, “Go! Fast!” The taxi dashed onto the highway at high speed for about 15 minutes while the businessman made phone calls and organized his paperwork. Suddenly, in a panic, the businessman yelled out, “Where are you going?” The taxi driver replied, “How do I know? You only said go!”
How often do we human service providers get so involved in our busy daily work routines that we lose sight of where we are heading or how we are doing? Having the right tools and equipment, such as the money to hire a taxi driver, does not necessarily mean you will be going to the right place. Having the needed professional education and providing the right kinds of services are only part of the success formula. A well-developed program plan and evaluation, along with a well-coordinated program implementation, distinguish success from failure.
A commercial pilot reflected on her daily routine of flying from one city to another. She said, “We start off each trip with the front wheels of a big plane resting on a 2-by-2-foot box painted on the ground at the gate. Thousands of miles later, the front wheels stop at another 2-by-2-foot box at another gate in another city. While in flight, the plane is off course more than 90% of the time in comparison to the printed flight route. Being off course is a given, due to the ever-changing weather conditions and other considerations such as air traffic. It is, however, because of the quality of the original flight plan and the ongoing evaluation and adjustments made during the flight that the plane ends up where it is supposed to be, on time and on target.” How much can human service providers learn from this pilot’s experience in regard to program planning and program evaluation?
Atherton and Klemmack (1982) discuss their concerns about the future of research for social workers. They recognize various important issues, including the need for a research orientation, the need for more experimentation, doing research with populations at risk, distinguishing research that serves the client or the agency, the social worker or the researcher, the influence of research sponsors, the need for operational definitions and specific outcome criteria, and the need for more theoretically based research. Royse, Thyer, Padgett, and Logan (2001) identify several “pragmatic issues” for program evaluation. Among them are the political nature of evaluation, the threat of evaluation, evaluation in politically charged arenas, and cultural sensitivity issues for evaluation practice. Their comments, along with those of many writers and scholars in human services, highlight that program planning and program evaluation are both academic and professional, idealistic and pragmatic, independent and political, and, most of all, an art and a science.

BOX 8.1 PROJECT STAR: PROVIDING TRAINING AND EVALUATION ASSISTANCE

The Corporation for National Service (the Corporation), a federal institution that allocates funds to state and local institutions and organizations for community service projects known as State and National AmeriCorps programs, expected these programs to conduct an internal outcome evaluation of their services to the community. After the first year of the State and National AmeriCorps programs, the Corporation found that program staff did not have the knowledge or skills to meet this requirement of developing and conducting an outcome evaluation of their services. Therefore, in 1995 the Corporation entered into a cooperative agreement with a research firm, Aguirre International, which developed an evaluation model using an empowerment evaluation approach to provide evaluation assistance and training to State and National AmeriCorps programs so that they could develop the skills to conduct an outcome evaluation of the services provided. Aguirre International’s training and evaluation assistance to State and National AmeriCorps programs, known as Project STAR (Support and Training for Assessing Results), not only gave the programs the opportunity to develop the capacity to conduct an internal evaluation measuring outcomes but also, in the process of developing evaluation plans and conducting the evaluation, provided program staff with a vehicle for improving program services.
Do Practitioners Do Planning and Evaluation?
Human service providers from different disciplines have different degrees of training in, commitment to, and familiarity with program planning and evaluation. Nevertheless, many of them consider themselves service personnel and are not interested in research and planning. They believe their main goals and functions are to provide direct service to clients; it is the job of administrators to worry about planning and evaluation. There are also myths and speculative fears that planning and evaluation require theory, logic, and statistics.
If social workers and other trained human service providers are satisfied with the notion that they are service technicians, then they may not have to carry many of the responsibilities of planning and evaluation. Technicians provide important and needed support to human services, and their contributions are valuable. Being a professional, however, involves more expectations. Among them are the establishment and use of an exclusive knowledge base, demonstration of ongoing learning and development, and self-evaluation of one’s practice. The abilities and the practice of planning and evaluating one’s practice and performance are both the means and the characteristics of professionalism. Certainly, trained professionals are interested in improving their practice and providing better services to their client populations. It is the degree and the rigor of that involvement that separate them on the continuum of professionalism.
The Need for Informed Planning and the Inclusion of Explanatory Studies
There is a need to identify contributing factors and to use them effectively in formulating appropriate intervention plans that address the identified problems. Exploratory and descriptive evaluations, particularly qualitative studies, provide the human faces and tell the real-life stories of the issues being studied. They provide the important process data and personal accounts that make program planning more responsive to the needs of clients. They have, however, limited power for generalization and offer restricted confidence in predicting and assuring expected outcomes.
It is often difficult for human service providers to control their planning and evaluation environments and to incorporate very important ethical considerations. But it is still possible, and desirable, to incorporate experimental elements into the program planning and evaluation process. Use of waiting lists, interruptions due to vacation breaks such as summer holidays in schools, and staggered service delivery schedules are among these methods. Single-subject designs and quasi-experimental designs are other valuable alternatives that can produce reliable data with more explanatory power.
Diversity, Social Justice, and At-Risk Populations
The United States is a multicultural society, and human service professionals should take into consideration the existence of different cultural beliefs and practices. There is no “one size fits all,” generic, all-purpose client assessment and intervention approach that applies to all cultures. Interventions have to be refined to fit the cultural context of the particular client. Clients’ perceptions and interpretations of events and issues, along with other personal, social, economic, and environmental factors, form the person-in-environment context for the formation and implementation of professional interventions. Diversity among people and cultures calls for practitioners to formulate differential and culturally appropriate interventions in working with clients of different backgrounds.
Human service professionals work with the most vulnerable populations in society. Many of these clients are dealing with violence, abuse, disability, and other adverse life conditions with limited resources. Many come to human services when their life conditions have gone beyond their normal means of coping. Minority populations face additional cultural barriers and discrimination that mainstream populations escape. Being sensitive to diversity, social justice, and special considerations in working with at-risk populations therefore has to be an integral part of all program planning and evaluation processes. DuBois and Miley (1996) define social justice as “the social condition that enables all members of a society to share equally in the rights and opportunities afforded by society and in the responsibilities and obligations incurred by their membership in society” (pp. 56–57). Often, particular individuals or groups are denied their opportunities to participate. According to DuBois and Miley, “full participation in society means that individuals have access to the social benefits of society in order to realize their own life aspirations, and, in turn, that they contribute to social well-being” (p. 57).
While advocating the use of the strengths perspective, one should also recognize the reality of problems and weaknesses within individuals and social systems. Among these deficiencies are the unjust social systems and human networks that deny people full participation and opportunities for betterment and actualization. Advocacy and empowerment should be an inherent part of socially responsible program planning and program evaluation.
Similar to the preventive approach in medical health, an increasing number of human service professionals are focusing on detecting and addressing risk factors that may negatively affect the quality of life of clients. Risk factors relate to the individual, the family, and their environment. Children who live in families or communities where cigarette smoking is prevalent are at higher risk for the health damage caused by secondhand smoke. Similarly, children who live in high-crime areas are more likely to become victims of crime and violence. The federal Center for Substance Abuse Prevention identifies risk factors in six different domains: individuals, peers, family, school, community, and society.
The logic model, which we advocate, represents an effort toward a greater degree of standardization and the general intentions of the scientific approach: study, predict, and control. As the main decision makers in traditional social structures, male, and particularly white male, perspectives have heavily influenced the practices of planning and evaluation. Consequently, they have set the groundwork as well as the so-called standards. With the increase of diversity in this country and the recognition of the values of diversity, there have been stronger emphases on developing programs and evaluations that are inclusive and culturally competent. There are many ways to increase the diversity and competency of the planning and evaluation processes, for example:
- Employ planners and evaluators of diverse backgrounds.
- Involve the diverse target populations in the development and implementation of programs and evaluations. As we discussed earlier, empowerment is the key to respecting and supporting diversity.
- Be sensitive to the cultural relevance of any program plan and evaluation tasks, including service modality, activities, research designs, and instruments.
- Use bias-free language and literature.
- Utilize qualitative and inductive planning and evaluation approaches such as ethnographic studies.
- Maintain open-mindedness in data collection, planning, and data analysis.
- Allow culturally appropriate alternative interpretations as valid options.
Who Is the Boss?
Program planning and program evaluation are conducted for the benefit of the service recipients, which eventually leads to the well-being of the community, agency, and society. The clients and their needs are the bosses and the rationale that give sanction to the program planning and evaluation processes. Human service needs that match the agency’s mission are the impetus that drives the planning process, which in turn produces programs with specific interventions and activities. Program activities then drive evaluation. In reality, there may be situations in which agency programs are driven by funding rather than needs. There may be some good reasons behind a traditional mental health agency suddenly becoming interested in getting funding to run a drunk-driver education program. However, there is also the reality of nonprofit agencies’ funding shortages and the increased demand for new services.
As we argued earlier, practitioners, given their knowledge of and expertise in service delivery and the clients they serve, should engage in the planning and evaluation processes. Their involvement also serves the purpose of practice evaluation, which should lead to the improvement of service quality.
Depending on its funding and nature, a program may have an external evaluator, an internal evaluator, or a staff member (usually the program director) who serves as the evaluator. There are advantages and disadvantages to having an internal or external evaluator. Nevertheless, one should bear in mind that there should be activity-driven evaluation, not evaluation-driven activities. This concern is particularly relevant when the outside evaluator has more formal education and apparently more evaluation expertise.
For example, a new university professor is invited to become the program evaluator. He is interested in evaluating the program. He also wants to maintain a particular academic rigor in his evaluation design to produce fine data that he may, one day, use for possible publication. After all, he is doing this partly to meet his community service requirements as well as the scholarship requirements for tenure and promotion. If articles come out of these program evaluation tasks, he has met some “publish or perish” demands. Rightfully, he plans on maximizing the output of his involvement. In order to produce an academic-quality data set, the program staff and participants may have to alter the program and other activities to produce the needed responses or data. A simple and functional post-event debriefing may be replaced by a pre- and post-test experimental design with standardized written instruments, which would first be pilot tested on a sample of the target population. The number of participants and their hours of participation may also need to be doubled in order to achieve
sufficient cases for statistical analysis and enough hours or dosage to produce a sufficient range of outcomes. Additionally, a comparison group, and preferably a control group, could be established in a sister program.
By inviting this enthusiastic program evaluator, the program will need to make many adjustments to accommodate the evaluation demands. In the end, the program activities are driven by the evaluation. No doubt the evaluation findings will be of great quality; in fact, if resources allow, a rigorous evaluation is always preferable, and it is in line with our support in earlier chapters for the use of experimental designs. However, in most situations, the effort and resources poured into this evaluation approach may deprive the program activities, and ultimately the clients they are intended to serve. Program evaluation should be driven by activities; there should be a balance and mutual support between the two.
Theory-Based, Objective-Guided, and Balanced Planning and Evaluation
Theory-based planning and evaluation, organized and measured by operationalized program objectives, provide the needed comprehensiveness and balance. As we proposed in Chapter 3, each program has its own philosophy or working hypothesis that is based on selected theories and practice experience. Similar to the program planning process, the evaluation process should assess the program’s theory bases and use them to guide the development of the evaluation.
There are process objectives, outcome objectives, and impact objectives, and program objectives should include a good mix of these three levels. Process objectives produce the needed data for reporting and other planning purposes. Outcome objectives detail the expected results and outcomes. Impact objectives extend beyond immediate gains and reach for long-term effects. A program plan should include a balanced set of objectives that can demonstrate results at all three levels.
KISS and Develop a Buy-In
Several years ago, one of the authors (Yuen) went to an elementary school in Kansas City, Missouri, to consult on a Senior Corps program in which retired and other senior citizens spent every day in school helping first- through third-grade students develop reading and study skills. It was explained to the volunteers that the principle used for planning and evaluation was KISS, an acronym for “Keep It Simple, Stupid.” Immediately, an African American woman in her 70s stood up and said, “Sir, I would appreciate it if you don’t use the word ‘stupid’ in front of my children. How about ‘sweet’?” Thanks to that nice woman, for this book KISS now stands for “Keep It Simple and Sweet.” Yes, the principle is to keep the program planning and evaluation tasks simple, user friendly, practical, and easy to understand. Making these perceived complicated planning and evaluation tasks easy to use and easy to get at will increase the chances that they will be accepted and utilized. This acceptance, which becomes the “buy-in,” is one of the most important elements of any program planning and evaluation process. When people are involved in the process, they develop an understanding of the program, feel that they are valued, and increase their levels of involvement. Although this may seem to be common knowledge, many human service providers fail to recognize its importance.
Resource Issues
The following are resource-related recommendations that could lead to the improved incorporation of planning and evaluation:
- Program planning and program evaluation are integral parts of the equation of success and should be in place from the beginning of any program. They should not be an afterthought when writing the end-of-the-year report or the continuation grant application.
- The program manager should set aside regular time for staff and volunteers to complete planning and evaluation functions. These are part of the regular workload, not an extra burden to be taken up only after all other tasks have been completed.
- Similarly, there should be a sufficient budget for program evaluation. If evaluation is valued, it should be done in a businesslike, professional manner.
- Because of the high staff turnover rate in certain service programs, it is a good idea to have a written program plan and evaluation plan to ensure consistency.
- Though often ignored, human resources are among the most valuable resources that human service agencies have. Trained and experienced staff, staff who contribute, staff who work together, and staff who are committed to the philosophy of the program and have expertise in service delivery are resources that make a program successful. They are the capital of the agency. The recruitment, retention, development, and management of quality staff demand the attention and support of the host organizations and the funding sources.
Summary
If you don’t know where you are going, how do you know whether you are going in the right direction? If you are going in the right direction, how will you know when you have arrived? Program planning and evaluation help set the course and the destination.

Some people say they enjoy the journey; some say they like the destination. The truth is that without either one, there is neither one! Program planning and evaluation are integral parts of any successful program, and they are learned skills: doing them well takes practice. As Thomas Edison once said, “Genius is one percent inspiration and ninety-nine percent perspiration.” The best way to get the practice and the grant funding for service is to start doing one today. Best of luck!
References

Atherton, C., & Klemmack, D. (1982). Research methods in social work. Lexington, MA: D.C. Heath.

DuBois, B., & Miley, K. (1996). Social work: An empowering profession. Boston: Allyn & Bacon.

Royse, D., Thyer, B., Padgett, D., & Logan, T. (2001). Program evaluation: An introduction (3rd ed.). Belmont, CA: Brooks/Cole.