I am currently evaluating a number of ways of integrating large language models into my Linux command line. Shell-AI (shai) is one of the easier ones to set up. With Shell-AI, you can simply input your intent in plain English (or other supported languages), and it will suggest single-line commands that achieve your desired outcome. It is designed to work on Linux, macOS, and Windows, though I only tested it on Linux. It is backed by OpenAI's GPT models, which is problematic for a number of reasons but also means the overall quality of the responses is cutting edge.
- Natural Language Input: Describe what you want to do in plain English (or other supported languages).
- Command Suggestions: Get single-line command suggestions that accomplish what you asked for. Select a suggestion, dismiss it, or regenerate in place.
- Cross-Platform: Works on Linux, macOS, and Windows.
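For context, getting started takes two steps. The package name and environment variable below match the project's PyPI release; the prompt itself is just an illustration:

```shell
# Install Shell-AI from PyPI (provides the `shai` command)
pip install shell-ai

# Shell-AI reads your OpenAI credentials from the environment
export OPENAI_API_KEY="sk-..."

# Describe your intent in plain English; shai replies with
# selectable one-line command suggestions
shai find all files larger than 1GB in my home directory
```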
Shell-AI result quality
I have thrown a few benchmarks and a few hours of real-world use at Shell-AI. As expected, the LLM component, based by default on gpt-3.5-turbo (although any OpenAI model can be configured), is top notch. Indeed, shai was able to answer most of the questions I would usually have had to Google, with reasonable solutions. It also saves time by avoiding copy-pasting and context switching. The surrounding implementation that wraps the GPT API is decent as well, offering multiple suggestions, making it easy to select one, and asking for confirmation before executing each command. However, there is no built-in option to ask for clarification. Quite often the output is a command chain that may be hard to understand, and since Shell-AI strips out any of the standard GPT fluff around the actual one-liner, an option to ask GPT for an explanation would be nice. As it stands, I found Shell-AI to be a poor tool for learning, and a somewhat risky one to use: it encourages you to run commands you may not fully understand.
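The interaction pattern described above (several suggestions, explicit confirmation before anything runs) can be sketched in a few lines of Python. This is not Shell-AI's actual code, just a minimal illustration of the wrapper pattern; the list of suggestions would come from the LLM call:

```python
import subprocess

def pick_and_run(suggestions, choose=input, run=subprocess.run):
    """Show numbered command suggestions, then require explicit
    confirmation before executing the chosen one."""
    for i, cmd in enumerate(suggestions, 1):
        print(f"[{i}] {cmd}")
    choice = choose("Select a command (or press Enter to dismiss): ").strip()
    if not choice.isdigit() or not 1 <= int(choice) <= len(suggestions):
        return None  # dismissed or invalid input: nothing is executed
    cmd = suggestions[int(choice) - 1]
    if choose(f"Run `{cmd}`? [y/N] ").strip().lower() != "y":
        return None  # user declined at the confirmation prompt
    run(cmd, shell=True)
    return cmd
```

The confirmation step is what keeps the tool merely risky rather than outright dangerous; an "explain this command" branch would slot naturally into the same loop.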
OpenAI backend issues
Shell-AI uses OpenAI's GPT models as its backend. That means:
- You have to have an API key and pay for each call.
- You need to be online at all times.
- There are very serious privacy concerns despite shai itself being FOSS.
- Response times are fairly slow, reducing the overall time savings. With gpt-3.5-turbo, which is supposed to be the fastest current option, response time is around 8 seconds. You can choose other models, but they will be even slower, and the quality gains aren't really relevant.
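The ~8-second figure is easy to check yourself: wrapping a shai invocation in the shell's time built-in shows the total wall-clock cost of one round trip (the prompt is just an example):

```shell
# Rough latency check: almost all of the wall-clock time is spent
# waiting on the OpenAI API, not on local processing
time shai "list the ten largest files under /var/log"
```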
While Shell-AI is mildly interesting and can save significant time in some situations, I won't be keeping it around. The main issue for me is privacy, but the sluggish performance limits its overall usefulness as well.