Unlocking the Power of Computational Thinking for LLM Success
Chapter 1: The Importance of Computational Thinking
Over the past year, as we spearheaded the creation of a GenAI-as-a-service platform for enterprises, we've faced numerous inquiries, such as “What possibilities exist for …” and “Can an LLM accomplish …”
This blog post will explore a vital competency that will help you respond to these queries more effectively: computational thinking. By the conclusion of this post, you will learn:
- What computational thinking entails
- Its significance in developing LLM applications
- A four-step approach to integrate computational thinking into LLM use case development
What is Computational Thinking?
Computational thinking is a strategic problem-solving method that decomposes complex problems into what I refer to as atomic tasks. It involves crafting a systematic, algorithmic approach to address issues, recognizing patterns and inefficiencies, and assessing the importance of each step.
Consider the process of preparing a meal.
The steps involved—finding recipes, shopping for ingredients, preparing those ingredients, cooking, and plating—represent the atomic tasks that form a cohesive unit of action. By dissecting a complex task into these smaller components, we can clarify the workflow. For instance, if you know you'll need to add sauce later, it’s wise to prepare it in advance, keeping it within easy reach. This foresight prevents the last-minute scramble for ingredients, ensuring a smoother cooking experience.
Moreover, computational thinking helps us identify transferable skills. If you need to chop chives for garnish and celery for a dish, both require similar techniques and tools. This insight can help streamline the process and eliminate redundancies.
This mindset also aids in planning the time and effort necessary for each step, enhancing predictability and robustness. For example, understanding how long it takes to heat a pan, chop ingredients, prepare a sauce, and cook pasta can guide you to prioritize tasks effectively. By employing computational thinking, we can better manage our timelines, reduce idle moments, and satisfy appetites more quickly.
How Does This Relate to LLMs?
Imagine you're asked to explore the potential of using an LLM to analyze an annual report. Many stakeholders may envision the LLM merely reading the document and producing the desired output with a simple prompt like “Extract KPIs from the document.”
However, without context regarding how to extract this information, where to locate it, and how to format the output, such instructions are not particularly useful. This scenario presents a classic needle-in-a-haystack challenge. While advanced LLMs can tackle such complex tasks (as highlighted in a tweet by Alex), simplifying the problem can enhance the clarity and effectiveness of the LLM’s output.
To tackle this, we can break the task into smaller, manageable components:
- Extract the table of contents
- Identify relevant sections
- Gather key information
- Adjust the tone and formatting of the response
This structured approach clarifies the workflow, enabling business stakeholders to comprehend and contribute to the process rather than relying on the LLM as a mysterious black box.
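To make that decomposition concrete, here is a minimal Python sketch of the four-part workflow. The call_llm helper is a hypothetical placeholder rather than any real API: swap in whichever model client you actually use, and treat the prompts as illustrative assumptions, not tuned examples.

```python
# A minimal sketch of the decomposed workflow. call_llm is a hypothetical
# placeholder, not a real API: wire it up to your own model provider.

def call_llm(prompt: str) -> str:
    """Placeholder for your model call (OpenAI, Vertex AI, a local model, ...)."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")


def extract_table_of_contents(report_text: str) -> str:
    # Atomic task 1: locate the table of contents only.
    return call_llm(
        "Extract the table of contents from this annual report excerpt. "
        "Return one section title per line with its page number.\n\n"
        + report_text[:8000]
    )


def identify_relevant_sections(toc: str, goal: str) -> str:
    # Atomic task 2: shrink the haystack before hunting for the needle.
    return call_llm(
        f"Given this table of contents:\n{toc}\n\n"
        f"List the sections most relevant to: {goal}."
    )


def gather_key_information(section_text: str) -> str:
    # Atomic task 3: extract KPIs from an already-relevant section.
    return call_llm(
        "Extract each KPI (name, value, reporting period) from the section "
        "below:\n\n" + section_text
    )


def adjust_tone_and_format(raw_kpis: str, audience: str = "an executive summary") -> str:
    # Atomic task 4: reshape the raw extraction for the end reader.
    return call_llm(f"Rewrite the following KPI list as {audience}:\n\n{raw_kpis}")
```

Because each atomic task is its own function, stakeholders can inspect the intermediate outputs instead of treating the whole pipeline as a black box.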
Four-Step Process for Implementation
Given that we cannot be domain experts in every field, it would be unrealistic to work out the full decomposition of every LLM task on our own. Below is a four-step method that has allowed us to develop over ten prototypes in just one month.
Step 1: Build on Existing Knowledge
In any organization, if there’s demand for a capability, it's likely that someone has already been tackling that problem, perhaps manually. The first step in applying computational thinking to an LLM prototype is to interview or shadow these individuals. Avoid commenting on the efficiency of their methods; focus instead on understanding their workflow, which, however imperfect, already works in some capacity. This information serves as a blueprint for constructing your LLM use case, making your problem statement more manageable.
Step 2: Identify Underlying Assumptions
Inquire why certain steps are necessary in the workflow. As domain experts, we often take our specialized knowledge for granted. These implicit assumptions can create barriers to understanding and replicating processes. We cannot simply state that the LLM is inadequate without considering the assumptions that experienced analysts have developed over years. Cataloging these hidden conditional branches is crucial for ensuring LLMs generate useful outputs.
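One lightweight way to catalogue those hidden conditional branches is to write each assumption down as an explicit rule and surface it in the prompt. The rules below are invented for illustration only; the real ones come from the interviews and shadowing in Step 1.

```python
# Illustrative only: these rules are made up, not real reporting conventions.
EXPERT_ASSUMPTIONS = [
    "Restated revenue figures live in the notes, not the headline table.",
    "KPIs for discontinued operations should be excluded.",
    "Figures in brackets denote negative values.",
]


def build_prompt(task: str) -> str:
    # Surfacing the analyst's implicit knowledge as explicit rules keeps the
    # LLM's behaviour closer to what the expert would do without thinking.
    rules = "\n".join(f"- {rule}" for rule in EXPERT_ASSUMPTIONS)
    return f"{task}\n\nFollow these domain rules:\n{rules}"
```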
Step 3: Operationalize the Blueprint
Once you have outlined the explicit and implicit steps that an expert would follow, you need to consider how to integrate these elements. Ask yourself: How critical is each step? Do we need to document every output? How much reasoning is required for each task? The answers may lead you to either consolidate tasks into a single LLM prompt or separate them into distinct prompts for more detailed outputs.
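As a hedged illustration of that trade-off, the sketch below contrasts the two extremes, reusing the hypothetical call_llm helper and atomic-task functions from the earlier sketch. Neither version is the "right" answer on its own; the choice depends on how much you need to inspect each intermediate output.

```python
# Step 3's two extremes, reusing the hypothetical helpers defined above.

def consolidated(report_text: str) -> str:
    # One prompt does everything: fewer calls and cheaper, but the reasoning
    # stays inside the model and is harder to audit.
    return call_llm(
        "Read this annual report, find the sections discussing financial "
        "performance, extract the KPIs, and present them as a short "
        "executive summary:\n\n" + report_text
    )


def separated(report_text: str, goal: str) -> str:
    # One prompt per atomic task: every intermediate output can be logged,
    # checked, and reused, at the cost of extra model calls.
    toc = extract_table_of_contents(report_text)
    shortlist = identify_relevant_sections(toc, goal)
    # In a real pipeline you would pull the full text of each shortlisted
    # section; this sketch passes the shortlist plus the report for brevity.
    kpis = gather_key_information(f"Relevant sections: {shortlist}\n\n{report_text}")
    return adjust_tone_and_format(kpis)
```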
Step 4: Refine the Blueprint
By this stage, you should have a functioning LLM pipeline to test. This is the moment to iterate on innovative ideas and identify ways to streamline the process. Ask yourself if tasks like table of contents extraction and KPI extraction can be generalized into a single prompt. Evaluate whether every step, such as reading page numbers, is necessary.
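One way to test that question is to check whether both atomic tasks fit a single parameterised template. The template below is an illustrative assumption, not a recommendation; whether it holds up depends on your documents and your model.

```python
# A generic, parameterised extraction prompt (illustrative assumption only),
# again using the hypothetical call_llm placeholder from earlier.

EXTRACTION_TEMPLATE = (
    "From the document below, extract the {target} and return it as "
    "{output_format}.\n\nDocument:\n{document}"
)


def generic_extract(document: str, target: str, output_format: str) -> str:
    return call_llm(EXTRACTION_TEMPLATE.format(
        target=target, output_format=output_format, document=document
    ))


# The same skeleton now covers both atomic tasks:
# toc  = generic_extract(report, "table of contents", "a numbered list")
# kpis = generic_extract(section, "key performance indicators", "a markdown table")
```

If the generalised version holds up in testing, you have removed a step; if it doesn't, you have learned exactly where the extra structure earns its keep.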
Final Thoughts
Don’t hesitate to experiment with different models, prompt structures, or even consider alternatives to LLMs. There is no one-size-fits-all solution. As you refine your blueprint, remember that if the output quality meets the needs of end users and stakeholders, they will typically favor simpler, more cost-effective options.
And while you’re at it, consider documenting your decision-making process. You might even create a digital twin capable of automating these tasks in the future!
If you’re curious about the thought processes behind these strategies and wish to accelerate your data science career, check out my blog post detailing my five realizations and 15 lessons for data scientists.
The first video titled "Computational Thinking" provides insights into the concept and its significance.
The second video, "Computational Thinking: What Is It? How Is It Used?" explores practical applications and benefits of computational thinking.
Before you leave, I’d love to hear your thoughts on the following:
- What intriguing LLM use cases have you recently encountered?
- Have you attempted to uncover implicit assumptions using LLMs?
- What has been your experience with utilizing different LLMs for tasks of varying complexity?
- When do you think a chain of thought becomes too complex to split into separate prompts?
Feel free to share your responses in the comments or connect with me on LinkedIn!
Until next time, this is Louis.
Louis Chan | Lead GCP Data & ML Engineer | Associate Director | KPMG UK | LinkedIn