Using chat completions for code generation - Sat, Nov 4, 2023
In my last example I showed how to use OpenAI's function calling capabilities to invoke existing Python functions in order to satisfy a user request expressed in natural language. In this example I would like to build on that and use the ordinary chat completion endpoint to generate functions directly from requirements expressed in natural language, and then invoke them.
The Setup
When using the function calling API, method signatures need to be passed using a predefined JSON schema. The structure of the result is defined by the same schema. The general chat completion endpoint, however, does not mandate that prompt or context comply with a certain schema, so this information needs to be passed differently. The context seemed to be the obvious place, but it took me some time to figure out what it should look like so that the returned message was more or less deterministic.
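For comparison, this is roughly the JSON-schema shape the function calling API expects for each signature (a sketch from memory of the late-2023 format, using the get_balance function from the previous example; field names may have changed since):

```python
# Sketch of a function description as passed to the function calling API.
# The "parameters" value is an ordinary JSON Schema object.
get_balance_schema = {
    "name": "get_balance",
    "description": "Return the balance of the account with the given number.",
    "parameters": {
        "type": "object",
        "properties": {
            "account_number": {
                "type": "string",
                "description": "The number of the account.",
            }
        },
        "required": ["account_number"],
    },
}

assert get_balance_schema["parameters"]["required"] == ["account_number"]
```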
Since OpenAI's API is rapidly evolving, the functions part of a message has meanwhile been deprecated; the tools and tool_choice parts can be used instead.
Building the right context using few shot prompting
The structure and content of the context prompt are very important in obtaining good results from the chat completion endpoint. Here is a simple example that I started with:
```python
{ "role": "user", "content": """
Given the following python functions:

def get_greeting(name: str) -> str:
    return f"Hello {name}"

def get_customer_name() -> str:
    return "John"

Task: Create a function that returns the greeting for a customer
""" }
```
This very basic example contains two functions. In order to greet a customer, the functions get_customer_name and get_greeting need to be invoked in succession.
In order to pass the information about the available functions to the API, I simply added their source code as part of the prompt.
A simple helper function does this. For the task Create a function that returns the greeting for a customer I was expecting the following code to be returned:
```python
def get_greeting_for_customer() -> str:
    return get_greeting(get_customer_name())
```
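The helper that assembles the prompt can be sketched roughly like this (my own reconstruction, not the original code; the name build_prompt and the exact wording are assumptions, and inspect.getsource is one way to grab the function sources):

```python
import inspect
from typing import Callable


def get_greeting(name: str) -> str:
    return f"Hello {name}"


def get_customer_name() -> str:
    return "John"


def build_prompt(functions: list[Callable], task: str) -> str:
    """Concatenate the source of the given functions and append the task."""
    sources = "\n".join(inspect.getsource(f) for f in functions)
    return f"Given the following python functions:\n{sources}\nTask: {task}"


prompt = build_prompt(
    [get_greeting, get_customer_name],
    "Create a function that returns the greeting for a customer",
)
assert "def get_greeting" in prompt
assert prompt.endswith("Task: Create a function that returns the greeting for a customer")
```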
During a handful of tries, almost all returned messages did contain similar code, but it was always accompanied by more or less extensive explanations, examples, and other chit-chat. One technique to get more predictable results is few-shot prompting (see also this example from an OpenAI cookbook). So I prepended some faked few-shot conversations (see here for all the conversations). As a result the returned message was more reliable. In fact, in all the examples I ran, the returned message never contained anything but usable code. So the next step was to actually invoke the generated function.
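The few-shot context is simply a list of faked user/assistant turns prepended to the real request. My messages roughly followed this shape (the example turn here is abbreviated and made up for illustration; the actual conversations are linked above):

```python
# Faked example turns that teach the model to answer with code only.
few_shot_messages = [
    {"role": "system", "content": "You return only python code, no explanations."},
    {"role": "user", "content": (
        "Given the following python functions:\n"
        "def get_name() -> str:\n"
        "    return \"Jane\"\n"
        "Task: Create a function that returns the name in upper case"
    )},
    {"role": "assistant", "content": (
        "def get_name_upper() -> str:\n"
        "    return get_name().upper()"
    )},
]


def build_messages(task_prompt: str) -> list[dict]:
    """Prepend the faked conversation to the actual request."""
    return few_shot_messages + [{"role": "user", "content": task_prompt}]


messages = build_messages(
    "Given the following python functions:\n"
    "def get_customer_name() -> str:\n"
    "    return \"John\"\n"
    "Task: Create a function that returns the greeting for a customer"
)
assert messages[-1]["role"] == "user"
```

The resulting list is what gets passed as the messages parameter of the chat completion request; the faked assistant turn demonstrates the desired answer format, which is what nudges the model toward code-only replies.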
Invoking the function
Note that invoking AI-generated code is a potential security risk: the code could be malicious.
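A minimal precaution (my own addition, not part of the original example) is to parse the returned code and reject anything whose top level is not just function definitions before handing it to exec:

```python
import ast


def looks_like_plain_functions(code: str) -> bool:
    """Accept only code whose top level consists of function definitions.

    This is not a sandbox -- the function bodies can still do anything
    once called -- but it rejects top-level statements such as imports
    or immediate os.system calls.
    """
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    return bool(tree.body) and all(
        isinstance(node, ast.FunctionDef) for node in tree.body
    )


assert looks_like_plain_functions("def f() -> int:\n    return 1")
assert not looks_like_plain_functions("import os\nos.system('echo pwned')")
```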
Using Python's exec function it is possible to dynamically execute Python code. Since I wanted to add a function to an existing module, I had to pass the __dict__ object of the module to the exec function (note that I am using the function_calling module from the previous example):
```python
# Add the function to the module function_calling
exec(function_code, function_calling.__dict__)

# Get the function using a string as its name
function = getattr(function_calling, "get_balance_for_customer")

# Call the function and check the result
balance = function("John Doe")
assert balance == 100
```
The task given in the prompt was:
```
Task: Create a function named get_balance_for_customer that returns the balance of an account
The parameter of the function is the customers name.
```
And the generated response contained the following code:
```python
def get_balance_for_customer(name: str) -> float:
    account_number = get_account_number(name)
    balance = get_balance(account_number)
    return balance
```
So it worked. The complete code for this rudimentary example can be found here.
Conclusion
So this basic example showed that it is possible to use the chat completion endpoint to create usable functions in an automated way, without using a specially trained model. It also showed how few-shot prompting can increase the predictability of large language models.
In my next post I look at how to make this approach also work with more general prompts like give me the balance of the customer John Doe, coming one step closer to a setup where application features are no longer coded but generated from natural language.