By nature, sharing information within the intelligence community is a challenge. Intelligence professionals are often frustrated by having to “reinvent the wheel” in order to meet an intelligence requirement. Very often, the solution already exists, but is buried somewhere in a senior intelligence officer’s mind — barred by security classifications and compartmentalization.
I kept returning to this issue of information sharing as I neared the end of my master’s degree in Intelligence Studies from American Military University. I spent a considerable amount of time reworking the research question for my final thesis. Finally, I decided to investigate how the intelligence community could do a better job capturing the experience and knowledge of analysts and passing that on to the next generation of intelligence officers like myself.
[Related: When Intelligence Assessments Conflict]
As part of my thesis research, I drew on my own professional experience as a Naval Intelligence Officer with an Information Warfare designation. I currently work as an EA-18G Growler Tactics Instructor at the Electronic Attack Weapons School (EAWS), where I spend my days instructing and evaluating junior intelligence officers on tactical analysis for Airborne Electronic Attack: the use of electromagnetic radiation to deny an enemy's use of critical radar and communications systems.
A New Model to Assess the Success of Intelligence Operations
At the end of every week of instruction at EAWS, I sit down and evaluate the intelligence division I've just instructed using a series of grade sheets. The EAWS grade sheet rates different qualities on a scale of one to five and then rolls those scores up into section averages, which tell squadron commanders where their crews are excelling and where improvements can be made. As part of my thesis research, I decided to use this same approach as a model to help me rate and analyze the success of intelligence operations.
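The grade-sheet mechanics can be sketched in a few lines of code. This is a minimal illustration of the scoring approach described above, not the actual EAWS instrument: the section names and scores below are invented placeholders.

```python
# Sketch of a grade-sheet evaluation: individual 1-5 quality scores
# are averaged within each section, then rolled up into one score.
# Section names and values are illustrative, not real EAWS categories.

def section_averages(grade_sheet):
    """Average the 1-5 quality scores within each section."""
    return {section: sum(scores) / len(scores)
            for section, scores in grade_sheet.items()}

def overall_average(grade_sheet):
    """Roll the section averages up into a single cumulative score."""
    averages = section_averages(grade_sheet)
    return sum(averages.values()) / len(averages)

# Hypothetical evaluation of one intelligence division's week:
sheet = {
    "mission_planning": [4, 5, 3],
    "threat_analysis":  [3, 3, 4],
    "briefing":         [5, 4, 5],
}
print(section_averages(sheet))
print(overall_average(sheet))
```

The same rollup works whether the thing being graded is an aircrew's week of training or, as in the thesis, a historical intelligence operation scored against a fixed set of criteria.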
With my professor’s help, I selected four classic intelligence operations to analyze:
- Operation TORCH was the Allied invasion of North Africa during World War II, which relied on submarines as a critical component in surveying amphibious landing sites and in guiding the landing craft.
- Operation MINCEMEAT was an Allied deception operation conducted by the British in which a corpse laden with misinformation was delivered off the coast of Spain in order to mislead German intelligence officers there as to the Allied invasion plans for Italy.
- Project COLDFEET was a joint U.S. Navy and CIA operation dedicated to the covert insertion and extraction of intelligence personnel at a remote, abandoned Soviet Arctic research station that had been conducting sonar research in support of Soviet submarine operations under the Arctic ice pack.
- Project AZORIAN was a joint U.S. Navy and CIA operation that was intended to recover and exploit a sunken Soviet submarine from the bottom of the Pacific Ocean.
Each of the four operations shared unifying themes (naval intelligence, operational tradecraft, and submarines), making them similar enough for a like-terms comparison. My objective was to create a set of evaluation criteria that would enable an "apples to apples" comparison across all four. This step was critical because no two intelligence operations are ever exactly alike.
[Related: Is Intelligence an Art or a Science?]
Evaluation Process
Using the modified EAWS grade sheet to evaluate the four intelligence operations, I noticed several patterns that went against what I originally predicted. I had thought (incorrectly) that operations planned very broadly with more possible avenues for success would naturally be the most successful. But surprisingly, this was the approach taken by the two least successful operations, TORCH and AZORIAN. Both had grown in scope until they were unwieldy; it was too difficult to manage such large groups of people and assets effectively.
On the other hand, the two most successful operations, MINCEMEAT and COLDFEET, were very small in scope. Meticulously planned and managed, each operation achieved a specific, limited goal with comparatively small planning teams and outlay of resources. These outcomes led me to create a new metric for evaluating intelligence operations: the Resource/Reward/Risk ratio. By considering these three elements in concert, intelligence officers and mission planners can better manage their planning process and finally benefit from the codified wisdom developed during previous operations.
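One plausible way to operationalize such a ratio is to treat reward as the payoff relative to the resources committed and the risk accepted. The combining formula and the example ratings below are my own illustrative assumptions; the thesis does not publish a specific equation.

```python
# Hypothetical Resource/Reward/Risk scoring: this particular formula
# is an assumption for illustration, not the thesis's actual metric.

def rrr_score(resource, reward, risk):
    """Higher is better: expected reward relative to the resources
    committed and the risk accepted (all rated on a 1-5 scale)."""
    return reward / (resource * risk)

# Notional ratings on a 1 (low) to 5 (high) scale:
small_focused_op = rrr_score(resource=1, reward=5, risk=2)  # MINCEMEAT-like
large_complex_op = rrr_score(resource=5, reward=4, risk=5)  # AZORIAN-like

print(small_focused_op > large_complex_op)
```

Under any reasonable weighting, the pattern the thesis found holds: a narrowly scoped operation that risks little to gain much outscores a sprawling one whose resource and risk costs swamp its payoff.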
[Related: Four Ways to Start an Intelligence Career]
The final chapter of my thesis focused on analyzing the results of each graded operation and articulating the lessons learned from each. These lessons were described as planning and execution recommendations:
- Intelligence planners should carefully consider the Resource/Reward/Risk ratio when proposing operations.
  - The United States Intelligence Community has a documented, demonstrated bias toward complex technical solutions executed at great expense.
  - Technical solutions leverage U.S. technical competency but require significant investments of time, money, and manpower.
- Limit the scope of operations to only what is immediately necessary to answer Commander’s Critical Information Requirements (CCIRs) or to accomplish the objective.
  - Define the desired end state in specific, concrete terms.
  - Actively limit the addition, but not the evolution, of operational goals.
- Involve mission experts in every step of the operation, from inception to completion.
  - No element of the operation should be without its representative expert.
  - Each mission-area expert should have input into the conduct of their respective element of the operation.
- Enable and require communication throughout the planning and execution process.
  - It is not enough to include mission experts; they must also communicate effectively with each other.
  - When the requirements of the operation conflict with the limitations of a key capability, the expert representing that capability should have final say.
- Recognize that creative, cunning, and clever human solutions are often far superior to technical ones.
- Require comprehensive after-action reports, composed by operation participants, following every operation, and classify those reports at the lowest level possible.
- As the comprehensive success of MINCEMEAT and COLDFEET demonstrates, there are no insignificant variables in planning, execution, or after-action assessment.
The Takeaway
At the conclusion of my research, I was able to identify and articulate a training gap and take steps toward correcting it on a very small scale. The main points to emphasize are attention to every detail, leveraging experts from operationally relevant communities, and limiting the scope of operational goals. Further research involving more metric categories and additional case studies would very likely yield insights invaluable to junior intelligence officers.
Ideally, the training for future intelligence officers will incorporate reviews of case studies in order to derive relevant knowledge as a necessary substitute for field experience. After all, no one lives long enough to make every mistake themselves – some knowledge must be gained from others’ experience.