
Stata commands to run ChatGPT, Claude, Gemini, and Grok

I wrote a blog post in 2023 titled A Stata command to run ChatGPT, and it remains popular. Unfortunately, OpenAI has since changed its API, and the chatgpt command in that post no longer runs. In this post, I will show you how to update the chatgpt command for the new API and how to write similar Stata commands that use Claude, Gemini, and Grok, like this:

. chatgpt "Write a haiku about Stata."

Data flows with ease,
Stata charts the silent truths,
Insights bloom in code.

. claude "Write a haiku about Stata."

Here is a haiku about Stata:

Stata, my old friend
Analyzing data with ease
Insights ever found

. gemini "Write a haiku about Stata."

Commands flow so fast,
Data shaped, models defined,
Insights now appear.

. grok "Write a haiku about Stata."

Data streams unfold,
Stata weaves the threads of truth,
Insights bloom in code.

The focus of this post, like the previous one, is to demonstrate how easy it is to take advantage of the PyStata features to connect to ChatGPT and other AI tools rather than to give advice on how to use AI tools to answer Stata-specific questions. Therefore, the examples I show simply ask for a haiku about Stata. However, you could pass any request that you would find helpful in your Stata workflow.

Review of Stata/Python integration

I will assume that you are familiar with Stata/Python integration and with the original chatgpt command. If these topics are unfamiliar, you may want to review my earlier blog posts on Stata/Python integration before continuing.

Updating the ChatGPT command

You will need an OpenAI user account and your own OpenAI API key to use the code below. I was unable to use my old API key from 2023 and had to create a new one.

You will also need to type shell pip install openai in Stata's Command window to install the openai Python package. You may need a different installation method if you are running Python as part of a platform such as Anaconda. Because I still had the 2023 version installed, I had to type shell pip uninstall openai to remove the old version and then shell pip install openai to install the newer one.
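For example, updating from the 2023 version of the package looks like this in the Command window:

. shell pip uninstall openai

. shell pip install openai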

Next, we need to replace the old Python code with new code that uses the modern API syntax. I typed python function to prompt chatgpt through api into a search engine, which led me to the Developer quickstart page on the OpenAI website. Some reading, followed by trial and error, resulted in the Python code below. The Python function query_openai() sends the prompt through the API using the "gpt-4.1-mini" model and receives the response. I did not include options for other models, but you can change the model if you like.

The remaining Python code does three things with the response. First, it prints the response in Stata's Results window. Second, it writes the response to a file named chatgpt_output.txt. Third, it uses Stata's SFI module to pass the response from Python back to a local macro in Stata. The third step works well for simple responses, but it can lead to errors for long responses that include nonstandard characters or many single or double quotation marks. You can place a # character at the beginning of the Macro.setLocal() line to comment it out and prevent the error.
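For example, to disable the third step, you would change the last line of query_openai() to

    # Macro.setLocal("OutputText", response.choices[0].message.content)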

You can save the code below to a file named chatgpt.ado, place the file in your personal ado-folder, and use it like any other Stata command. You can type adopath to locate your personal ado-folder.


capture program drop chatgpt
program chatgpt, rclass
    version 19.5 // (or version 19 if you do not have StataNow)
    args InputText
    display ""
    python: query_openai("gpt-4.1-mini")
    return local OutputText `"`OutputText'"'
end
    
python:
from openai import OpenAI
from sfi import Macro
    
def query_openai(model: str = "gpt-4.1-mini"):
    # Get the input string from the Stata local macro; reading the macro
    # directly avoids quoting problems when the prompt contains quotes
    inputtext = Macro.getLocal('InputText')

    # Enter your API key
    client = OpenAI(api_key="PASTE YOUR API KEY HERE")

    # Send the prompt through the API and receive the response
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": inputtext}
        ]
    )

    # Print the response in the Results window
    print(response.choices[0].message.content)

    # Write the response to a text file
    f = open("chatgpt_output.txt", "w")
    f.write(response.choices[0].message.content)
    f.close()

    # Pass the response string from Python back to a Stata local macro
    Macro.setLocal("OutputText", response.choices[0].message.content)
end
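One refinement you may want to consider: rather than pasting your API key into the ado-file, you can store it in an environment variable so that the key never appears in your code. Here is a minimal sketch for the OpenAI client; the Anthropic, Gemini, and xAI packages offer similar mechanisms, so check each package's documentation for the variable name it expects.

import os
from openai import OpenAI

# Read the key from the OPENAI_API_KEY environment variable
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])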

Now we can run our chatgpt command and view the response in the Results window.

. chatgpt "Write a haiku about Stata."

Data flows with ease,
Stata charts the silent truths,
Insights bloom in code.

We can type return list to view the response stored in the local macro r(OutputText).

. return list

macros:
         r(OutputText) : "Data flows with ease, Stata charts the silent truths, Insights bloom .."

And we can type type chatgpt_output.txt to view the response saved in the file chatgpt_output.txt.

. type chatgpt_output.txt
Data flows with ease,
Stata charts the silent truths,
Insights bloom in code.

It worked! Let’s see whether we can use a similar strategy to create a Stata command for another AI model.

A Stata command to use Claude

Claude is a popular AI model developed by Anthropic. Anthropic provides an API, and you will need to set up a user account and get an API key on its website. After acquiring my API key, I typed python function to query claude api into a search engine, which led me to the Get started with Claude page. Again, some reading and trial and error led to the Python code below. You will need to type shell pip install anthropic in Stata's Command window to install the anthropic package.

Notice how similar the Python code below is to the Python code in our chatgpt command. The only major difference is the code that sends the prompt through the API and receives the response. Everything else is nearly identical.

You can save the code below to a file named claude.ado, put the file in your personal ado-folder, and use it just like any other Stata command.


capture program drop claude
program claude, rclass
    version 19.5 // (or version 19 if you do not have StataNow)
    args InputText
    display ""
    python: query_claude()
    return local OutputText `"`OutputText'"'
end
    
python:
from sfi import Macro
from anthropic import Anthropic
    
def query_claude():
    # Pass the input string from a Stata local macro to Python
    inputtext = Macro.getLocal('InputText')

    # Enter your API key
    client = Anthropic(
        api_key='PASTE YOUR API KEY HERE'
    )

    # Send the prompt through the API and receive the response
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1000,
        messages=[
            {"role": "user", "content": inputtext}
        ]
    )

    # Print the response to the Results window
    print(response.content[0].text)

    # Write the response to a text file
    f = open("claude_output.txt", "w")
    f.write(response.content[0].text)
    f.close()

    # Pass the response string from Python back to a Stata local macro
    Macro.setLocal("OutputText", response.content[0].text)
end

Now we can run our claude command and view the response.

. claude "Write a haiku about Stata."

Here is a haiku about Stata:

Stata, my old friend
Analyzing data with ease
Insights ever found

We can type return list to view the response stored in the local macro r(OutputText).

. return list

macros:
         r(OutputText) : "Here is a haiku about Stata: Stata, my old friend Analyzing data with ea.."

And we can type type claude_output.txt to view the response saved in the file claude_output.txt.

. type claude_output.txt
Here is a haiku about Stata:

Stata, my old friend
Analyzing data with ease
Insights ever found

You may sometimes see an error like the one below. This does not indicate a problem with your code. It is telling you that the API service or network has timed out or has been interrupted. Simply wait and try again.

  File "C:\Users\ChuckStata\AppData\Local\Programs\Python\Python313\Lib\site-packages\anthropic\
> _base_client.py", line 1065, in request
    raise APITimeoutError(request=request) from err
anthropic.APITimeoutError: Request timed out or interrupted. This could be due to a network timeout, 
> dropped connection, or request cancellation. See https://docs.anthropic.com/en/api/errors#long-requests
> for more details.
r(7102);
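If you would rather trap the timeout yourself than let it halt your do-file, you could wrap the API call inside query_claude() in a try/except block. Here is a minimal sketch that catches the APITimeoutError class shown in the traceback above; client and inputtext are defined earlier in the function.

    from anthropic import APITimeoutError

    # Send the prompt, but trap timeouts instead of raising an error
    try:
        response = client.messages.create(
            model="claude-3-haiku-20240307",
            max_tokens=1000,
            messages=[{"role": "user", "content": inputtext}]
        )
    except APITimeoutError:
        print("The request timed out. Please wait and try again.")
        return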

A Stata command to use Gemini

Gemini is a popular AI model developed by Google. Google also provides an API, and you will need to set up a user account and get an API key on its website. After acquiring my API key, I typed python function to query gemini api into a search engine, which led me to the Gemini API quickstart page. Again, some reading and trial and error led to the Python code below. You will need to type shell pip install -q -U google-genai in Stata's Command window to install the google-genai package.

Again, you can save the code below to a file named gemini.ado, put the file in your personal ado-folder, and use it just like any other Stata command.


capture program drop gemini
program gemini, rclass
    version 19.5 // (or version 19 if you do not have StataNow)
    args InputText
    display ""
    python: query_gemini()
    return local OutputText `"`OutputText'"'
end
    
python:
from sfi import Macro
from google import genai
    
def query_gemini():
    # Pass the input string from a Stata local macro to Python
    inputtext = Macro.getLocal('InputText')

    # Enter your API key
    client = genai.Client(api_key="PASTE YOUR API KEY HERE")

    # Send the prompt through the API and receive the response
    response = client.models.generate_content(
        model="gemini-2.5-flash", contents=inputtext
    )

    # Print the response to the Results window
    print(response.text)

    # Write the response to a text file
    f = open("gemini_output.txt", "w")
    f.write(response.text)
    f.close()

    # Pass the response string from Python back to a Stata local macro
    Macro.setLocal("OutputText", response.text)
end

Now we can run our gemini command and view the response.

. gemini "Write a haiku about Stata."

Commands flow so fast,
Data shaped, models defined,
Insights now appear.

We can type return list to view the response stored in the local macro r(OutputText).

. return list

macros:
         r(OutputText) : "Commands flow so fast, Data shaped, models defined, Insights now appea.."

And we can type type gemini_output.txt to view the response saved in the file gemini_output.txt.

. type gemini_output.txt
Commands flow so fast,
Data shaped, models defined,
Insights now appear.

A Stata command to use Grok

OK, one more just for fun. Grok is another popular AI model developed by xAI. You will need to set up a user account and get an API key on its website. After acquiring my API key, I typed python function to query grok api into a search engine, which led me to The Hitchhiker's Guide to Grok page on the xAI website. Again, some reading and trial and error led to the Python code below. You will need to type shell pip install xai_sdk in Stata's Command window to install the xai_sdk package.

Once again, you can save the code below to a file named grok.ado, put the file in your personal ado-folder, and use it just like any other Stata command.


capture program drop grok
program grok, rclass
    version 19.5 // (or version 19 if you do not have StataNow)
    args InputText
    display ""
    python: query_grok("grok-4")
    return local OutputText `"`OutputText'"'
end
    
python:
from sfi import Macro
from xai_sdk import Client
from xai_sdk.chat import user
    
def query_grok(model: str = "grok-4"):
    # Get the input string from the Stata local macro; reading the macro
    # directly avoids quoting problems when the prompt contains quotes
    inputtext = Macro.getLocal('InputText')

    # Enter your API key
    client = Client(api_key="PASTE YOUR API KEY HERE")

    # Send the prompt through the API and receive the response
    chat = client.chat.create(model=model)
    chat.append(user(inputtext))
    response = chat.sample()

    # Print the response to the Results window
    print(response.content)

    # Write the response to a text file
    f = open("grok_output.txt", "w")
    f.write(response.content)
    f.close()

    # Pass the response string from Python back to a Stata local macro
    Macro.setLocal("OutputText", response.content)
end

Now we can run our grok command and view the response in the Results window.

. grok "Write a haiku about Stata."

Data streams unfold,
Stata weaves the threads of truth,
Insights bloom in code.

We can type return list to view the response stored in the local macro r(OutputText).

. return list

macros:
         r(OutputText) : "Data streams unfold, Stata weaves the threads of truth,  Insights b.."

And we can type type grok_output.txt to view the response saved in the file grok_output.txt.

. type grok_output.txt
Data streams unfold,
Stata weaves the threads of truth,
Insights bloom in code.

Conclusion

I hope the examples above have convinced you that it is relatively easy to write and update your own Stata commands to run AI models. My examples were intentionally simple and meant only for educational purposes. But I'm sure you can imagine many options you could add to support other models or other kinds of prompts, such as audio or images. Perhaps some of you will be inspired to write your own commands and post them on the web.
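For example, here is a minimal sketch, building on the chatgpt command above, of one way you might let the user choose the model by passing an optional second argument. The default model name and the gpt-4o example below are assumptions, so substitute any model your account can access.

capture program drop chatgpt
program chatgpt, rclass
    version 19.5 // (or version 19 if you do not have StataNow)
    // An optional second argument chooses the model
    args InputText Model
    if "`Model'" == "" local Model "gpt-4.1-mini"
    display ""
    python: query_openai("`Model'")
    return local OutputText `"`OutputText'"'
end

You could then type chatgpt "Write a haiku about Stata." "gpt-4o" to try a different model.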

Those of you reading this post in the future may find it frustrating that the API syntax has changed again and the code above no longer works. This is the nature of working with APIs: they change, and you will have to do some homework to update your code. But there are many resources on the internet to help you update your code or write new commands. Good luck and have fun!