Oswald Siku Mughongora
No one can deny that Namibia has over the years introduced several interventions that on paper appeared “groundbreaking” and “people centred”. Their implementation and the results thereof, however, have been far from “groundbreaking”.
Many quarters will argue that most of these initiatives have been outright failures. Let us take the Mass Housing Development Programme (MHDP) and the Targeted Intervention Programme for Employment and Economic Growth (TIPEEG) as examples, and discuss why and how a working monitoring and evaluation system could have yielded different results. The MHDP, launched in 2013, aimed to increase the supply of affordable housing to meet the country’s demand. It was considered “groundbreaking and people centred” because it promised to address the country’s critical housing crisis by building 148 000 houses by 2030. At the time, statistics indicated that Namibia’s housing backlog stood at around 300 000 units and that 40% of Namibians lived in informal settlements.
Talk about the magnitude of impact it was going to have! On paper, we seemed headed in the right direction, until the project was abruptly stopped in 2015 with only 4 130 houses completed (and occupied) and 891 unoccupied! Why the abrupt ending? In an interview, the Prime Minister, Madam Saara Kuugongelwa-Amadhila, blamed legal disputes and the underperformance of contractors (New Era Live, 2022). This clearly shows a lack of accountability; quite frankly, the government required none from these contractors. This is where monitoring and evaluation comes in!
Let us take another example to further qualify and quantify why monitoring and evaluation can serve as an economic buffer and recovery tool. In 2011, Namibia introduced the Targeted Intervention Programme for Employment and Economic Growth (TIPEEG) with the objective of creating 104 000 direct and indirect jobs between 2011 and 2014 through a N$14 billion budget. Good intentions, right? At its closure, TIPEEG, just like the MHDP, fell far short of its goal. The then Minister of Finance, Calle Schlettwein, was quoted in The Namibian newspaper (2018) as saying: “I don’t regret spending TIPEEG funds, but I wish TIPEEG had better returns. I wanted much better productive capacity out of that huge amount we spent.”
Despite the huge budget, TIPEEG has been widely criticised for creating mainly short-term jobs. (Unconfirmed) reports show that in the three years of its implementation, the programme created only 83 000 jobs, of which a mere 15 829 were permanent; measured against the original target of 104 000 jobs, that is roughly a 15% success rate in terms of permanent employment. The question is: what can be done to ensure the successful implementation of such developmental interventions and enhance accountability on the part of contracted service providers? The answer lies in having a stringent monitoring and evaluation system (and framework) accompanying implementation. What, then, is this monitoring and evaluation, and how can it be implemented in these interventions? Monitoring and evaluation are management tools that help keep project/programme activities under control and raise the level of performance. The OECD/DAC defines monitoring as the routine collection and analysis of information to track progress against set plans and check compliance with established standards. It helps identify trends and patterns and informs decisions for project/programme management. Evaluation, on the other hand, is a systematic and objective assessment of an ongoing or completed project, programme, or policy: its design, implementation, and results. Thus, evaluation gauges the success of a project or programme in meeting its stated objectives. Monitoring and evaluation data can be used in many ways; the three most common are learning, accountability, and project/programme management. Practically, it is important to develop a Monitoring and Evaluation Framework at the project/programme planning stage that clearly articulates all activities; the results (or deliverables) expected from these activities at output, outcome, and impact level; the indicators at each result level; the baselines and targets for each indicator; the data sources and how data will be collected; how frequently the data will be collected; and the personnel responsible for data collection and reporting.
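To make the framework less abstract, here is a minimal sketch, in Python, of how one indicator entry in such a framework might be captured. The field names, figures, and responsibilities are illustrative assumptions on my part, not an actual MHDP framework:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One entry in an M&E framework: what is measured, against what, how, and by whom."""
    name: str            # what is being measured
    result_level: str    # "output", "outcome" or "impact"
    baseline: float      # value at the start of the intervention
    target: float        # value expected by the end of the intervention
    data_source: str     # where the data comes from
    frequency: str       # how often the data is collected
    responsible: str     # who collects and reports the data

# Hypothetical example loosely modelled on a housing programme:
houses_completed = Indicator(
    name="Affordable houses completed and occupied",
    result_level="output",
    baseline=0,
    target=148_000,      # the MHDP's stated 2030 target
    data_source="Contractor progress reports and site inspections",
    frequency="quarterly",
    responsible="Programme M&E officer",
)
```

Laying indicators out this explicitly at the planning stage is what later makes deviations measurable rather than a matter of opinion.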
This should be done at both the interventional level and the beneficiary project level (if the intervention has several implementation levels). Thus, project management is accompanied by a well-aligned monitoring and evaluation framework that makes it easy to track not only project activities but also expected results, through pre-determined and defined data collection methods and analysis. Sounds very theoretical, right? I know! You might be asking how this is then implemented practically. At the interventional level, using the Mass Housing Development Programme for illustration purposes, the following steps ought to be undertaken. (i) Develop a Monitoring and Evaluation Framework to accompany project planning documents, such as the Theory of Change and implementation plans; this overall framework sets out the goal and scope of the programme. (ii) Where, as in the above example, the intervention has subsequent beneficiaries, in this case service providers contracted to perform certain tasks, ensure that a Monitoring and Evaluation Framework developed collaboratively with the client (here the government, or rather the responsible government entity) and the service providers (contractors) forms part of the service agreement. This framework outlines all the expected activities to be executed and the results thereof. This way, the service provider is legally bound to the content articulated in the framework, thus promoting accountability. (iii) Develop an indicator tracking sheet, and then (iv) track the indicators at the pre-determined frequency outlined in the framework, against the targets. The result is that any deviations and variations are picked up as early as possible and corrected. Additionally, financial disbursements are attached to the implementation of activities and the achievement of milestones, so non-conformity cannot drag on at financial cost to the client. Monitoring and evaluation should therefore be conducted throughout the life cycle of a project, programme, or even policy, including after completion. Continuing streams of M&E data and feedback add value at every stage, from design through implementation and close-out.
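Steps (iii) and (iv) are straightforward to sketch. The snippet below, again in Python and with purely hypothetical figures and a hypothetical 10% tolerance, shows how a tracking sheet can surface deviations automatically at each reporting period:

```python
# A minimal sketch of steps (iii) and (iv): an indicator tracking sheet that
# flags deviations from target early. All figures and thresholds are
# illustrative assumptions, not actual programme data.

def flag_deviations(tracking_sheet, tolerance=0.10):
    """Return indicators whose actual progress lags the period target by more
    than `tolerance` (10% by default), so corrective action can start early."""
    flagged = []
    for row in tracking_sheet:
        shortfall = (row["period_target"] - row["actual"]) / row["period_target"]
        if shortfall > tolerance:
            flagged.append((row["indicator"], round(shortfall * 100, 1)))
    return flagged

# Hypothetical quarterly entries for a housing programme:
sheet = [
    {"indicator": "Houses completed", "period_target": 2_500, "actual": 1_100},
    {"indicator": "Houses occupied",  "period_target": 2_000, "actual": 1_950},
]

for indicator, pct in flag_deviations(sheet):
    # In practice, a flag like this would trigger a site inspection and could
    # withhold the milestone-linked disbursement until the gap is explained.
    print(f"DEVIATION: '{indicator}' is {pct}% behind its period target")
```

The point of the exercise is the timing: a 56% shortfall flagged in the first quarter can be corrected or sanctioned; the same shortfall discovered at project closure, as with the MHDP, can only be lamented.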
Here are the common types of monitoring applicable in the Namibian context to enhance transparency and accountability:
• Process (activity) monitoring tracks the use of inputs and resources, the progress of activities, and the delivery of outputs. It examines how activities are delivered.
• Results monitoring tracks outcomes and impacts.
• Financial monitoring accounts for costs by input and activity within predefined categories of expenditure. It is often conducted in conjunction with compliance and process monitoring.
• Compliance monitoring ensures compliance with regulations and expected results, grant and contract requirements, local governmental regulations and laws, and ethical standards.
• Beneficiary monitoring tracks beneficiary perceptions of a project/programme.
In conclusion, good monitoring and evaluation depend on good planning. A good monitoring and evaluation system and framework can help cut project costs, enhance project success, and ultimately contribute toward economic growth (and recovery). The time to take monitoring and evaluation seriously is now. With all the talk of green hydrogen and the oil and gas discoveries, Namibia needs to put proper controls and risk mitigation and avoidance measures in place, and that includes developing a stringent monitoring and evaluation system at both national and interventional levels!