Shape your Mind's actions like setting up dominoes. By arranging and linking components (e.g., Browse Webpage, Google Image Search) together, you enable your Mind to execute multi-step tasks.

What is Workflow?

Workflow lets you drag-and-drop components (e.g., 'Browse Webpage', 'Google Image Search', 'DALL·E') to create a chain of logic, allowing your Mind to follow your own optimized procedure, thinking and planning just like you.

Workflow doesn't just make your Mind more efficient and reliable in everyday tasks; it is also invaluable for professionals who want to impart industry-specific know-how to their Minds.

For example, imagine you're in digital marketing and want to analyze social media trends. You can create a Workflow that starts with gathering data from web pages using [Browse Webpage], identifies trends by processing the data according to the method you specify using [LLM], and ends with generating a simple report using [Output to file]. This enables your Mind to execute the task in a way that aligns with your own expertise.

Add Workflow

Want to supercharge your Mind without the heavy lifting? Click [Add from Library], then click [+Add] to give your Mind instant expertise in a variety of fields!

A Glimpse of What's inside the Library 📚

Once you add these Workflows to your Mind, it can perform these tasks during conversations. It's like giving your Mind a PhD in multiple subjects with just a few clicks. So why not take the plunge? Browse through our library, add some Workflows, and let your Mind dazzle you with its newfound talents!

Let's take a look together at the changes we can bring to Max's capabilities and performance by adding an 'Industry Reports' Workflow!

For those new here, Max is a Mind we began crafting in the Persona section. He is dedicated to interpreting industry information for users.

Build Workflow

As you embark on building a Workflow within Mind, there are three foundational pillars you need to grasp: Output, Input, and Mapping. Picture your Workflow like a high-tech factory assembly line, where each station (component) has a specific role and the sequence of them matters.

Output: Varied Products of Each Station

On your assembly line, each station/component finishes a task and produces an 'Output' that gets sent down to the next station. The output varies depending on what the station specializes in:

For example, the 'Google Image Search' component outputs an array containing the links and descriptions of the images found in the search.

Mapping: Orchestrator of the Line

Think of 'Mapping' as creating the roadmap for your assembly line. Before you start assembling anything, you need to decide which station passes its products/data to which other station.

To set this up, hover your mouse over the green dot on the right side of a component you consider 'upstream.' When your mouse cursor changes to a cross icon, click and drag a line from this green dot and connect it to the green dot on the left side of the component you consider 'downstream.' And just like that, you've made a connection!

'Mapping' not only helps in organizing the sequence but also in optimizing the flow of data across multiple stages of your Workflow.


A single 'upstream' component can send its output to multiple 'downstream' components. Conversely, a 'downstream' component can also receive different types of output from multiple 'upstream' components.
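Under the hood, this fan-out and fan-in wiring is just a directed graph. The sketch below is purely illustrative (the node names and the dictionary representation are assumptions for the example, not MindOS's actual data model):

```python
# Illustrative sketch: a Workflow's mapping as a directed graph.
# One upstream component can feed several downstream components,
# and one downstream component can collect outputs from several
# upstream components.
edges = {
    "Start": ["Google Search", "Browse Webpage"],   # fan-out
    "Google Search": ["LLM"],
    "Browse Webpage": ["LLM"],                      # LLM fans in
    "LLM": ["End"],
}

def upstreams_of(node, edges):
    """Return every node whose output is mapped into `node`."""
    return [src for src, dsts in edges.items() if node in dsts]

print(upstreams_of("LLM", edges))  # both Google Search and Browse Webpage
```

Here the 'LLM' node receives two different kinds of output (search results and webpage content), while 'Start' sends its fields to two downstream nodes at once.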

Input: Selecting the Right Data Source for Each Component

Once your 'Mapping' is set, it's time to be specific about the 'Input' that each component will use. Just like different stations on an assembly line require different types of material to do their specific jobs, different components in a Workflow also need different kinds of 'Input' as their data sources.

For instance, a 'Google Image Search' component might need a search query as its input to look for images. On the other hand, a "Browse Webpage" component will require a URL as its input to read a webpage's content. The type of input you can choose will be listed in a dropdown menu, and these options are determined by what the preceding components have produced.


Each component within the editing interface has its own ID (you can see it next to its name). When selecting the input of a 'downstream' component, please carefully confirm that you've chosen the correct input by comparing IDs.

Understanding these fundamental aspects will arm you with the knowledge you need to create Workflows that are not only effective but also intuitive. Now you're ready to harness the full capabilities of Minds.

SummaLink, one of the most popular Minds on MindOS's Marketplace, can read the content of webpages and deliver structured summaries for you, saving you time browsing. Here's how the 'Summarize Webpage' Workflow is structured:

  1. Start: Capturing the URL. First things first, SummaLink needs to know which webpage you're interested in summarizing. To achieve this, you'll add a field called 'URL' to the 'Start' component. Here, SummaLink will extract the URL of the webpage the user is interested in from their conversation, much like how an assembly line first collects its raw materials.

  2. Browse Webpage: Fetching the Webpage Content. With the URL in hand, the next job is to actually access and read that webpage. This is where the 'Browse Webpage' component comes in. It fetches the entire content of the webpage, making it ready for the next phase. Imagine it as the part of the assembly line where the raw materials are unpacked and laid out for assembly.

  3. LLM: Generating the Summary. After obtaining the webpage's content, it's time to transform it into a structured summary. Insert an 'LLM (Large Language Model)' component here. The LLM will process the fetched webpage content and generate a structured summary based on the prompts you provide. It's akin to the main manufacturing station where the magic happens and the raw materials become a useful product.

  4. End: Serving the Summary. Last but not least, the summary generated by the LLM needs to be presented back to the user. It is sent to the 'End' component and appears in the user's ongoing conversation with SummaLink. Think of it as the end of the assembly line, where the finalized summary is ready for delivery.
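The four steps above can be sketched as a simple pipeline. Everything in this sketch is illustrative: `browse_webpage` and `summarize` are stand-in stubs for the real components, not MindOS's actual API:

```python
def browse_webpage(url: str) -> str:
    # Stand-in for the 'Browse Webpage' component: fetch page content.
    return f"<content of {url}>"

def summarize(content: str) -> str:
    # Stand-in for the 'LLM' component: turn content into a summary.
    return f"Summary of {content}"

def summarize_webpage_workflow(url: str) -> str:
    # Start -> Browse Webpage -> LLM -> End
    content = browse_webpage(url)   # step 2: fetch the page
    summary = summarize(content)    # step 3: generate the summary
    return summary                  # step 4: delivered to the user

print(summarize_webpage_workflow("https://example.com"))
```

Each function's return value becomes the next function's argument, exactly like one station's output becoming the next station's input on the assembly line.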


When setting up your 'LLM' component, you'll often need to include specific inputs in your prompts to give the LLM access to those inputs. You can do this easily using curly brackets '{}' to insert inputs dynamically. For example, the prompt in the 'LLM' component of the 'Summarize Webpage' Workflow is:


You should generate a summary of the above content using the following format:

[n Minutes Saved] (Replace 'n' with the amount of time your summary saves the reader by not having to read the above content.)

[Summary with emoji here] (consisting of no more than 5 bulleted points in markdown format; each bulleted point must end with an emoji)

By using '{}' to encapsulate the input names, you're telling the LLM exactly how to handle each piece of incoming data.
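The `{}` placeholders behave like ordinary string templating. As an analogy, here is the same idea expressed with Python's `str.format` (the placeholder name `webpage_content` is an assumption for the example, not a MindOS identifier):

```python
# A prompt template with one named placeholder, filled in at run time.
prompt_template = (
    "Webpage content:\n{webpage_content}\n\n"
    "You should generate a summary of the above content."
)

# The Workflow substitutes each {placeholder} with the matching
# upstream input before sending the prompt to the model.
prompt = prompt_template.format(webpage_content="MindOS launches Workflows...")
print(prompt)
```

Whatever text the upstream component produced lands exactly where the placeholder sits, so the model sees one complete prompt.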

Introduction to commonly used components



LLM

The LLM component serves as a pivotal computational unit within your Workflow. Unlike traditional programming approaches that rely on hard-coded logic and programs, the LLM component simplifies this by harnessing the capabilities of Large Language Models like GPT. With just natural language prompts, you can teach your Mind to process a wide range of text-based data in your desired manner.

Model Selection: GPT3.5-Turbo vs. GPT3.5-Turbo-16k

When adding the LLM component, users have a choice between two models: GPT3.5-Turbo and GPT3.5-Turbo-16k.

GPT3.5-Turbo: This is a powerful, general-purpose language model suitable for a wide array of tasks.

  • Advantages: Faster and more cost-effective.

  • Disadvantages: Limited by a smaller token limit, meaning it might truncate very long texts.

GPT3.5-Turbo-16k: This model has a much larger token limit, allowing for longer text passages or more turns in a conversation.

  • Advantages: Capable of handling much longer texts, useful for detailed analysis or complex tasks.

  • Disadvantages: More resource-intensive, which might incur higher costs and slightly slower response times.

Advanced Configuration: Max_token, Temperature, and topP

Max_token: This defines the maximum number of tokens (words, sub-words, or characters) in the output. Lower values make the output more concise, while higher values allow for more detailed responses.

Temperature: Ranging from 0 to 1, a lower temperature (e.g., 0.2) makes the output more focused and deterministic, while a higher temperature (e.g., 0.8) makes it more random and creative.

topP: This value also ranges from 0 to 1 and controls the diversity of the output. A lower value (e.g., 0.2) makes the model stick closely to the most likely output, while a higher value (e.g., 0.8) allows for more diversity.
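To build intuition for these two knobs, here is a small sketch of temperature scaling and top-p (nucleus) filtering over a toy distribution. It illustrates the concepts only; it is not the model's actual sampler:

```python
import math

def apply_temperature(logits, temperature):
    # Softmax with temperature: lower values sharpen the distribution
    # (more deterministic), higher values flatten it (more random).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p; everything outside that set is excluded.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

probs = apply_temperature([2.0, 1.0, 0.1], temperature=1.0)
print(top_p_filter(probs, top_p=0.8))  # only the two most likely tokens survive
```

With a low topP, only the highest-probability tokens remain candidates; raising topP admits more of the tail, which is where the extra diversity comes from.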

Prompt Template Best Practice

In the "Prompt Template" field, users can craft their natural language prompts to guide the model. It's crucial to reference the input of the LLM component using curly braces, like {input}, to specify what data the model should process. For example, if you want the model to summarize an article, your prompt might look like: "Please summarize the following article: {article_content}."


Input

The Input component serves as a flexible data entry point within your Workflow, capturing user prompts or other text data that can be fed into subsequent components. Unlike other mainline components, it operates in the background to enhance the specialized performance of targeted nodes in your Workflow.

By strategically using the Input component, you can initialize your Workflow with the precise data or queries it needs, enabling a more tailored and efficient execution of tasks.


Code

The Code component acts as the reliable workhorse of your Workflow, offering precise data-manipulation capabilities such as regular expressions, data splitting, and result concatenation. While it demands some Python programming expertise to configure, its stability and precision make it an invaluable tool for specialized tasks. If coding isn't your forte, you can always use the versatile LLM component for data processing instead.
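For a flavor of the kind of logic the Code component can host, here is an illustrative snippet combining all three capabilities mentioned above: a regular expression, splitting into items, and concatenating the result. The function name and output format are assumptions for the example, not MindOS's configuration format:

```python
import re

def extract_and_join_urls(text: str) -> str:
    # Regular expression: pull every http(s) URL out of upstream text.
    urls = re.findall(r"https?://[^\s)\"']+", text)
    # Data splitting / concatenation: dedupe (preserving first-seen
    # order) and join into one newline-separated string that a
    # downstream component could consume.
    unique = list(dict.fromkeys(urls))
    return "\n".join(unique)

sample = "See https://example.com and https://example.org (and https://example.com again)."
print(extract_and_join_urls(sample))
```

Deterministic transformations like this are exactly where the Code component beats an LLM: the same input always produces the same output.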

Webpage Browsing

The Webpage Browsing component fetches the entire content of a webpage specified by a given URL. It serves as a foundational step for downstream components that may require webpage content for various tasks like data analysis or content extraction.

LLM for Arrayed Output

The LLM for Arrayed Output component uses Large Language Models to process specified input parameters and returns an array of results organized according to the guidelines set in your prompt. It's particularly useful for tasks like search keyword expansion or organizing textual data into discrete categories for further analysis.

Batched Search

The Batched Search component elevates your Workflow's research capabilities by allowing you to input an array of up to 10 different search keywords. Unlike the 'Google Search' component, which takes a single keyword, 'Batched Search' broadens the scope of your information gathering. It returns a list of search results for each keyword, creating a richer dataset for downstream components to process.
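Conceptually, 'Batched Search' maps a single-keyword search over an array of keywords. The sketch below is a stand-in illustration of that pattern (the `google_search` stub and its fake results are assumptions, not the real component):

```python
def google_search(keyword: str) -> list[str]:
    # Stand-in stub for the single-keyword 'Google Search' component.
    return [f"result 1 for {keyword}", f"result 2 for {keyword}"]

def batched_search(keywords: list[str]) -> list[list[str]]:
    # Enforce the documented cap of 10 keywords per batch.
    if len(keywords) > 10:
        raise ValueError("Batched Search accepts at most 10 keywords")
    # One list of search results per keyword, in the same order.
    return [google_search(k) for k in keywords]

print(batched_search(["MindOS", "AI agents"]))
```

The output is an array of arrays, one inner list per keyword, which is the shape the batched downstream components are built to consume.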

Batched Webpage Browsing

The Batched Webpage Browsing component is designed to simultaneously crawl multiple webpages, provided you feed it an array of URLs as its input. This allows for efficient gathering of webpage content in scenarios where single-page browsing won't suffice. When paired with components like Batched LLM, it becomes a powerful tool for analyzing large sets of web content in an organized way.

Batched LLM

The Batched LLM component is specifically designed to handle the multi-array outputs generated by the Batched Webpage Browsing component. Unlike a standard LLM component, which may struggle to process multiple arrays in a structured manner, the Batched LLM can individually analyze each array—consisting of elements like link, title, and content—to produce more structured and accurate results. Just like in the LLM component, you'll need to specify key parameters in each array using curly braces ({example}) in the 'Prompt Template'. This ensures the model accurately processes each part of the complex data structure it receives.



Start

This node extracts key fields from the user's conversation with the Mind, then starts the entire workflow by passing these parameters to the nodes after it.


End

This node gathers key results and information generated by the previous nodes and ends the entire workflow by outputting these fields to the Mind's expression.

Output to file

Workflow has the potential to handle very complex and rich information sources. Sometimes the generated results will have multiple dimensions or sections, and such results may be best presented in an independent, well-structured Markdown file. In the Request Body, choose the node processing result that you want the Mind to output to file.


DALL·E

The DALL·E node is connected to a large image-generation model that can draw pictures based on the input prompt. Enter the processing result of the previous node in the Request Body as the painting prompt.

Display Tips

Workflows can generate complex content, such as analysis reports and news summaries. To make this content more readable, you may want parts of it to be presented as cards in the conversation. Click the [⚙️ icon] in the Workflow configuration interface, then click [Display] to specify the content the Mind should display when invoking the Workflow. You can present key URLs, text, or multiple fields as cards in the conversation.
