Government program review process
This meant many more organizations needed funding help, yet the actual volume of submitted applications went down: restrictions made it difficult to plan and pilot in-person, direct-service programs.

On the bright side, we saw much more support for public safety initiatives from community and corporate foundations. Existing grant programs also showed flexibility, allowing budget modifications to focus on emerging needs. On the law enforcement side, the Department of Justice ushered in a new online grants portal to streamline funding at the federal level, though the conversion from existing systems did not go as smoothly as end-users had hoped.

Technical issues caused delays in both application submission and award announcements. Mental health continues to receive a lot of focus—both the mental wellness of first responders and how law enforcement can improve its response to citizens dealing with mental health issues or emotional crises.

We are beginning to see creative methods from all first responders (police, fire service, and EMS) for integrating mental health into different aspects of their jobs. This is a critical area to watch for grant funding, as it could start to involve medical and healthcare grants instead of just criminal justice grants. These same funders are also moving away from only funding existing grantees and back to open solicitations, which is great news for new applicants. The biggest mistake we saw was a focus on wants vs. needs.

Grantors want to see that there is a need and that the applicant is not looking to get money for just anything.

Under the direction of the Office of Management and Budget (OMB), the process of rating the performance of all agency programs seeks to be comprehensive in its coverage, so that agencies are rated on a scorecard that is comparable across the federal government. The process is based on agency scores on criteria established by the OMB after much consultation inside and outside government.

The agency scores are calculated based on evidence from the agencies that comes from credible sources, including program evaluations and external audits. The agencies are also rated on their capacity to provide evidence on their own performance.

The more independent the evidence, of course, the higher the rating on this score. What is most instructive from the American experience is the recognition that program evaluation is the best way to ascertain and assess program effectiveness in securing desired impacts, results and outcomes.
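The scorecard logic described above can be sketched in a few lines. This is a hypothetical illustration only: the criterion names, weights, and the independence premium below are invented for the sketch and do not reflect the OMB's actual criteria or weightings.

```python
# Hypothetical sketch of an evidence-weighted agency scorecard.
# Criteria, weights, and the 10% independence premium are invented.

def rate_agency(scores, independence):
    """Average criterion scores (0-100), weighted up by evidence independence.

    scores:       dict of criterion -> score, drawn from evaluations and audits
    independence: 0.0 (self-reported only) to 1.0 (fully external evidence)
    """
    base = sum(scores.values()) / len(scores)
    # More independent evidence raises the rating on this dimension
    # (up to a hypothetical 10% premium).
    return round(base * (1 + 0.10 * independence), 1)

same_scores = {"purpose": 80, "planning": 70, "management": 75, "results": 60}
agency_a = rate_agency(same_scores, independence=0.9)  # mostly external audits
agency_b = rate_agency(same_scores, independence=0.2)  # mostly self-reported
print(agency_a, agency_b)  # identical raw scores rate higher with independent evidence
```

The point of the sketch is only the relative ordering: with the same raw evidence, the agency whose evidence is more independent scores higher.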

However, the bottom line is that if government decision-makers want to strengthen the performance of government, they must pay attention to program effectiveness. Program evaluation, accordingly, must be undertaken as a core function of governance and public management [14]. Two lessons from these comparative experiences are significant to this discussion. First, the evolution is not necessarily a continuous record of progress: commitment to the use of evidence is not always what it needs to be, at the level of ministers or senior officials or both; competence in understanding the need for, and yet the limitations of, effectiveness evidence is missing in some cases; and the processes for using evidence can be or become deficient, failing to embed the use of evidence in the decision-making process.

Second, these experiences, taken together, illustrate that the use of evidence on effectiveness and performance in government decision-making is a necessary condition of good governance and public management. Political responsiveness and fiscal discipline are necessary but they are not sufficient. All three elements must be present. The point is that government must have programs that have their desired effect.

Continuous efforts to ascertain and assess program effectiveness are therefore incumbent on government decision-makers and managers.

Building on Experience

For the Canadian government to build on what it has accomplished to date and to learn from its own experience as well as from international experience, several principles to govern the decision-making process must be accepted.

Recognizing Program Evaluation as a Core Function of Public Management

Program evaluation is a core function because it seeks to ascertain and assess the effectiveness of government programs in achieving desired results, impacts and outcomes.

The purpose and business of government is to provide programs of various kinds to achieve desired results, that is, to be effective. Governments should be responsive to citizens and their priorities, including especially those of the major stakeholders of the various programs government provides. But a government is not fully or sufficiently responsive if the programs it provides in order to be responsive are not effective, or not as effective as they could be, within whatever restraints government decision-makers must operate.

Governments should also be responsible in the spending of public monies; ministers and officials must exercise fiscal discipline. But if programs are not effective in achieving results, it counts for nothing that they may be affordable, economical, or efficiently managed. Having a budgetary surplus or a balanced budget does not mean that every program provided by government delivers value for the money spent.

An ineffective program is a waste of public money. Results-based management, except insofar as it fully incorporates program evaluation, is no substitute for program evaluation, however useful it may be for management control and improvement. Performance measurement regimes do not seek to ascertain or assess program effectiveness. Rather they seek to determine the extent to which departments achieve results or outcomes. They measure achievement against targets.

They do not attempt to explain or account for the performance in question, let alone the effectiveness of their programs. In the case of some programs, performance measurement may be all that is required, especially when the outcome is nothing more than the delivery of an output (a good or service).

In other cases, it may be all that is possible or feasible, as when it is clear an evaluation would be methodologically impractical or too costly. At a time when there is more than a little confusion over the numerous "initiatives" undertaken to improve management performance, financial control and public accountability, it is important that the core function of program evaluation be clearly understood by politicians and public servants.

Program evaluation is not just another initiative; it is a core function of governance and management. It is not an optional method or technique. Good governance and public management require on-going program evaluation.

Embedding Program Evaluation in the Decision-Making Process

Even though program evaluation is a core function of governance and public management, it still needs to be embedded in the decision-making process.

Program evaluation is one of those functions, along with planning, coordinating and reporting, that can be ignored by decision-makers who are inattentive to the requirements of good governance and public management. A critical test of the quality of a government's decision-making process, accordingly, is whether there are requirements to ensure that evidence on program effectiveness is brought to bear in decision-making. Embedding program evaluation in decision-making requires that there be a corporate or whole-of-government requirement that program evaluations be conducted and that the evidence from them be used in decision-making.

Letting departmental managers decide whether or not to do program evaluations, under a philosophy of management devolution, ignores the fact that government decision-making, including government budgeting, at some point becomes more than a departmental responsibility; it becomes a corporate or whole-of-government responsibility. Departments are not independent of government as a whole or, for that matter, of each other.

They have corporate and, increasingly, horizontal responsibilities where decision-making must involve central agencies and other departments. In recognition of the particular circumstances and undertakings of different departments, departments may be given some flexibility or discretion in regard to the coverage, timing and types of the program evaluations that they use.

But, in principle, there should be no exceptions to the requirement for evidence on program effectiveness in government decision-making. Requiring all departments to conform to the requirements of this core function is not a step backwards in public management reform. Public management reform over the past twenty-five years has always and everywhere assumed that some dimensions of governance have to be conducted at the centre and according to corporate principles.

Even those who most favour decentralization, deregulation and devolution recognize that there must be some "steering" from and by the centre. The idea that the centre should not prescribe at all represents a misreading of the credible public management reform models as well as a failure to acknowledge the legitimate demands of Parliament and citizens for a central government structure that directs, coordinates and controls the various departments of government.

Linking Program Evaluation to Budgeting and Expenditure Management

Embedding program evaluation in the decision-making process means that evidence on program effectiveness should be brought to bear in government decision-making. In particular, it should be brought to bear in budgeting and expenditure management, since this is the place where evidence about program effectiveness is most likely to have the greatest use.

First, evidence on effectiveness can help to decide on priorities in terms of resource allocation, including reallocation. Second, it can help to decide on changes to existing budgets, where the evidence suggests that changes are required, including, but not only, incremental adjustments upwards.

It follows, of course, that the budgeting and expenditure management process must incorporate a 'budget office' function that constitutes the primary centre for the consideration of evidence from program evaluations: first, as a source of advice into the policy and resource allocation processes; and, second, as the main decision-making centre for making changes to the existing budgets of programs.

The budget office function requires that there be discretionary funds available for enriching programs, as necessary based on evidence, but also the discretion to alter program budgets as necessary.

In practice, this means that there should always be a budget reserve in order that the expenditure management function can be performed without having to resort to a review and reallocate exercise.

These two dimensions are distinct activities but need to be closely interrelated in practice, since each is likely to be best performed by a single set of central agency officials who are as fully informed as possible about the programs in a given department or area of government. This also justifies the TBS assessing the extent to which the evidence on effectiveness meets the standards of quality expected of departments, even allowing for variations given the nature of each department's programs.
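The budget-office mechanism described above can be sketched as a simple reallocation loop: a central reserve enriches programs the evidence shows to be effective, and ineffective programs return funds to the reserve. The programs, verdicts, and adjustment step below are invented for illustration; real expenditure management decisions are of course not mechanical.

```python
# Minimal sketch of evidence-based budget adjustment from a central reserve.
# Program names, verdicts, and the 5% step are invented for illustration.

def adjust_budgets(budgets, effectiveness, reserve, step=0.05):
    """Move money between a reserve and program budgets based on evidence.

    budgets:       dict program -> current budget
    effectiveness: dict program -> "effective" | "ineffective" | "unknown"
    reserve:       central discretionary fund
    step:          fraction of a budget shifted per decision cycle
    """
    new = dict(budgets)
    for prog, verdict in effectiveness.items():
        delta = round(budgets[prog] * step)
        if verdict == "effective" and reserve >= delta:
            new[prog] += delta      # enrich from the reserve
            reserve -= delta
        elif verdict == "ineffective":
            new[prog] -= delta      # return funds to the reserve
            reserve += delta
        # "unknown" verdicts leave the budget unchanged
    return new, reserve

budgets = {"training": 1000, "outreach": 800}
new, reserve = adjust_budgets(
    budgets, {"training": "effective", "outreach": "ineffective"}, reserve=200
)
print(new, reserve)  # {'training': 1050, 'outreach': 760} 190
```

Note that enrichment only happens while the reserve can cover it, which is the role the text assigns to maintaining a budget reserve: changes can be made without resorting to a review-and-reallocate exercise.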

Securing Independence for Program Evaluation

Program evaluation is a core function that requires a degree of independence from those directly responsible for the programs being evaluated. The function is one that, like internal audit, must be undertaken by those at least one step removed from program management.

There is a tradeoff, however, in securing the independence of program evaluation: the more external, the greater the independence; the more internal, the greater the ownership of the findings. There is no easy resolution. Placing responsibility for program evaluation at the departmental level means that deputy ministers decide on the importance of the function in departmental decision-making.

This runs the risk that deputies do not see the function as critical to their agenda. It also runs the risk that a deputy devolves responsibility to the department's functional specialists in program evaluation and thereby pins primary responsibility for the use of evidence down the line on program managers. When this happens, program evaluators invariably adjust what they do to focus on being helpful to program managers. This is understandable. But it invariably diminishes the extent to which the program evaluations themselves are relevant to the larger purposes of government decision-making, including expenditure management.

Program evaluation should not focus primarily on assisting managers at the level of program management in improving program management or delivery. Performance measurement systems are better suited to these purposes. Program evaluation is best suited to raising demanding questions about program effectiveness for senior departmental officials.

To do so, evaluators need to be independent of program managers. And they should do so in ways that also provide evaluations with which central agency officials can challenge the claims of senior departmental officials in the government decision-making process.

Enhancing Quality in Program Evaluation

Program evaluations are demanding exercises not only to undertake but to use.

Decision-makers need assurance that the evidence presented by program evaluations is credible. In short, they need to be assured that the program evaluations they use are of the highest quality.

The quality of program evaluations depends on the quality of the staff who carry out this function, the resources devoted to it, and the extent to which the functional community is developed and maintained as a professional public service community. None of this will happen unless there is demand for quality from senior officials or ministers. The quality of program evaluation will also be enhanced to the degree that the decision-making system allocates resources to priorities that are linked to actual programs or new program designs.

Linking priorities to actual or proposed programs is clearly the hardest part of government decision-making, since program effectiveness evidence is seldom definitive. But absent evidence on program effectiveness, priorities will be established merely on the basis of preferences. That is rarely good enough for achieving value for money.

The agenda will specify the deliverables and the discussion's time frame.

During the face-to-face interview, the PRL will identify issues and lead the discussion. If findings exist, the PRL will document them and ask questions to clarify the underlying issues. The Review Team will then inform the Project Manager of the findings so the issues can be verified.

The Project Manager will have one business day to clarify the findings. The intent of this process is to ensure that issues are clearly stated; it is not designed as a forum for negotiation, discussion, or iteration. After the review has been completed and the Review Team has presented its findings to the Project Manager for clarification, the team will create its report. The report will make a determination of Red, Yellow, or Green for each project reviewed.

The Analyst will distribute the Project Review Report. A copy will also be available in the Review archives five days after the review is completed. The Review Team initiates the review process by scheduling reviews with the Project Manager. It is important to schedule the review so that it can be conducted in a manner that is not disruptive to the project itself.

This is not always possible; however, the disruption a review will cause should be taken into account. The Project Review process is part of the PM methodology and as such is an integral component of effective project management. A Project Review Report will be generated from the project review process. Based on these findings, the project will be categorized as Red, Yellow, or Green. The project's status will indicate whether the project complies with project management standards.

A second review will be scheduled for all projects; how soon and to what depth will be determined by the findings. An action plan to address risk exposures must be put in place for Red and Yellow projects. Thus, the project review process helps to identify the areas that require attention to make the project team and the project manager successful.
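The Red/Yellow/Green determination described above can be sketched as a simple rule over the review findings. The severity levels and the mapping below are invented for illustration; the actual determination criteria are not spelled out in this document.

```python
# Hypothetical sketch of the Red/Yellow/Green status determination.
# The "major"/"minor" severity scheme and thresholds are invented.

def project_status(findings):
    """Categorize a project from its review findings.

    findings: list of finding severities, each "major" or "minor".
    Green  = complies with project management standards
    Yellow = minor gaps only
    Red    = at least one major non-compliance
    """
    if any(f == "major" for f in findings):
        return "Red"
    if findings:  # findings exist, but none are major
        return "Yellow"
    return "Green"

print(project_status([]))                  # Green
print(project_status(["minor"]))           # Yellow
print(project_status(["minor", "major"]))  # Red
```

A rule of this shape makes the status monotonic: adding a finding can never improve a project's color, which matches the report's role of flagging areas needing attention.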

The project reviews will be conducted in three phases, described in detail below. The Project Review team compiles an Entry Packet, which contains materials to help the Project Manager understand the Project Review process and lists what the Project Manager needs to make available for the review. Subsequent meetings will be scheduled with the Project Manager and project team members.

Project Managers should ensure that documents are stored in their designated areas. The Project Manager will identify the project's participants and their respective roles. During the Document Review, documents are retrieved for analysis to ensure that required project deliverables are in their designated locations. Steps conducted in the research phase include:

The deliverables and documentation are analyzed, and then compared against the approved Project Methodology. The review will last between one and two business days, depending on the size, scope, and complexity of the project. The Analyst will consolidate notes and create a preliminary set of findings and issues. If practical at this time, recommendations for preliminary findings can also be documented.

This briefing is to ensure that there is no miscommunication between the Project Manager and the Review Team. The Review Team will formalize the information collected during the pre-review analysis, review, and post-review findings. Recommendations will be made to the Project Manager.

The Project Manager then implements the recommended changes. A copy will then be stored in the Review archives. There are four types of reviews conducted during the course of a project.

The reviews are conducted after control points are triggered. The first project review will determine whether the project meets the requirements defined as necessary to initiate a project.

The completion review will determine that the project met all requirements, sign-offs, and deliverables defined by the project scope, and that the project management process met all required standards. It will also determine that all project technical, financial, and contract closure events have been completed correctly. Special reviews will determine that the project meets all requirements, sign-offs, and deliverables defined by the project scope that are due at the point in time the special review is conducted.

It will determine that all project management processes are in place and meet all required standards. The special review will determine that all project technical, financial and client closure events due at the time of the review have been completed correctly. A Stakeholder Requested Review will be conducted to examine the identified issue. The format of this review is similar to the standard Project Review. The stage in the project's life cycle will determine what deliverables will be reviewed.

The types of reviews and their timing are determined by the project's attributes. The three project attributes are (1) the testing or non-testing attribute, (2) project phase, and (3) criticality. These three attributes determine the project's status, and the project status determines the timing and composition of project reviews.

The chart above identifies the scheduled reviews.
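Since the scheduling chart itself is not reproduced in this text, the attribute-driven scheduling can only be sketched hypothetically. The attribute values, cadences, and team sizes below are invented; only the overall shape (three attributes in, a status and review schedule out) comes from the text.

```python
# Hypothetical sketch of attribute-driven review scheduling.
# Phases, cadences, and team sizes are invented; the real values
# would come from the scheduling chart referenced in the text.

def review_schedule(testing, phase, criticality):
    """Derive a project status and review cadence from its attributes.

    testing:     True if this is a testing project
    phase:       e.g. "initiation", "execution", "closure"
    criticality: "high", "medium", or "low"
    """
    status = (testing, phase, criticality)
    # Illustrative rule: high-criticality projects in execution are
    # reviewed most often and by a larger team.
    if criticality == "high" and phase == "execution":
        return {"status": status, "cadence_days": 30, "team_size": 4}
    if criticality == "high":
        return {"status": status, "cadence_days": 60, "team_size": 3}
    return {"status": status, "cadence_days": 90, "team_size": 2}

sched = review_schedule(testing=True, phase="execution", criticality="high")
print(sched["cadence_days"], sched["team_size"])  # 30 4
```

The design point carried over from the text is that the attributes determine the status, and the status alone determines review timing and composition.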


