Behind the Build: AI Learning Assistant

Talking through your coding questions or roadblocks is one of the best ways to overcome them. And while a quick chat with your teammate or a rubber duck typically does the trick, there’s a faster and more interactive way to make progress and work through problems as you learn: the AI Learning Assistant. 

We’ve combined our AI-powered learning tools into a single assistant that you can chat with for personalized, conversational guidance. The AI Learning Assistant can correctly interpret how much progress you’ve made in a particular exercise and generate advice that’s specific to your individual learning journey. Instead of turning to a search engine or prompting ChatGPT every time you have a question, the AI Learning Assistant contextualizes its response based on where you are in an exercise and what code you’ve written. 
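To give a sense of what that contextualization might look like under the hood, here is a minimal sketch of folding a learner’s exercise state and code into a prompt. The field names, structure, and helper function are our own illustration, not Codecademy’s actual schema.

```python
# Illustrative only: a hypothetical exercise context folded into a model prompt.
# None of these field names come from Codecademy's actual implementation.
from dataclasses import dataclass

@dataclass
class ExerciseContext:
    course: str        # e.g. "Learn Python 3"
    lesson: str        # title of the current lesson
    checkpoint: str    # the instruction the learner is working on
    learner_code: str  # the code currently in the editor

def build_prompt(ctx: ExerciseContext, question: str) -> str:
    """Combine the learner's current position and code with their question."""
    return (
        f"The learner is in '{ctx.course}', lesson '{ctx.lesson}'.\n"
        f"Current checkpoint: {ctx.checkpoint}\n"
        f"Their code so far:\n{ctx.learner_code}\n\n"
        f"Learner question: {question}"
    )
```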

You can think of the AI Learning Assistant as a study buddy or tutor that you can use as a sounding board. “In a classroom setting, you can cast your doubts, give your thoughts, or spark conversations that lead to different ways of learning,” says Jatin Parab, Senior Software Engineer at Codecademy. “This now unlocks the conversational nature that we lack in online learning.”

The AI Learning Assistant is available for all learners to try now. Ahead we spoke to Jatin and a few of our engineers who helped bring this feature to life about the process of building an AI-powered assistant. 

The project: Build an AI-powered assistant in the learning environment.   

Initially, our engineers were “caught in a bit of a chicken-and-egg situation,” Jatin says. Because the AI Learning Assistant is highly contextual, the feature needed to exist before we could experiment with it effectively, yet we needed to interact with it before we could define its requirements. To build the feature, we needed requirements; to define the requirements, we needed the feature.  

The solution? “We decided to develop a prototype interface, essentially a stripped-down version of our learning environment,” Jatin says. “This provided us with a place where we could play, experiment, and refine the prompts.” The tasks associated with this launch included: 

- Validate whether having an AI assistant in the learning environment is worthwhile
- Establish a prototype interface to experiment with the learning environment
- Combine all our AI features within the learning environment
- Get the model to behave in a certain way and scale it

Investigation and roadmapping 

Jatin: “After establishing the testing interface, the Engineering Team focused on building out the features, while the UX and Curriculum Teams concentrated on refining the system prompts and ensuring their alignment with the learning objectives.

We divided the engineering tasks into three main pillars: first, the UI aspect; second, configuring the model to respond in specific ways, essentially managing its responses; and third, handling chat history and managing user restrictions. 

Highlight your code, then click the “Explain code” button, and the AI Learning Assistant will explain the selected snippet to you.

Drawing from my experience working with generative AI technologies in a previous job and in personal projects, I took on the responsibility of getting the assistant and the model operational. Another team member tackled the UI integration and also worked on managing the chat history. 

A significant portion of the project involved crafting the right prompts. This entailed extensive testing of different prompt versions and scenarios. We also had help from the Curriculum Team and the UX Team to test various scenarios that learners might encounter. The Curriculum Team had a better understanding of the kinds of questions learners might ask, which wasn’t directly within my skill set.”

Implementation  

Jatin: “I mentioned the testing interface that we built, which was a prototype. For this, we utilized a library called Streamlit. Streamlit is a Python library that lets you build a UI directly from back-end code, eliminating the need to develop a separate front-end application. This made for rapid development, since we could stand up the whole interface without writing a front end. 
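For a rough idea of why that pattern speeds things up, here is a minimal Streamlit chat-style prototype. It is only an illustrative sketch, not the team’s actual prototype code, and the model call is stubbed out with a placeholder.

```python
# Minimal Streamlit chat prototype (illustrative sketch; not the team's actual code).
# Run with: streamlit run prototype.py
import streamlit as st

st.title("AI Learning Assistant (prototype)")

# Keep the conversation across Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Handle a new question from the learner
if question := st.chat_input("Ask about the exercise..."):
    st.session_state.messages.append({"role": "user", "content": question})
    with st.chat_message("user"):
        st.write(question)

    # Placeholder for whatever back-end call generates the real reply
    reply = f"(model response to: {question})"
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```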

Additionally, we leveraged another technology called LangChain, a well-known open-source project focused on generative AI. We opted for LangChain because it offers a range of built-in tools tailored to common generative AI use cases, such as building agents and question-answering systems.”
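As a rough illustration of the kind of plumbing LangChain provides, the sketch below wires a prompt template to a chat model. LangChain’s API has shifted across versions; this follows the current prompt-pipes-into-model style, and the model name and tutoring instruction are placeholders rather than the team’s actual configuration.

```python
# Illustrative LangChain sketch (assumes the langchain-openai and langchain-core
# packages and an OPENAI_API_KEY in the environment; not the team's actual chain).
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# A prompt template with a system instruction and a slot for the learner's question
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a tutor. Guide the learner with hints before full answers."),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model choice is illustrative

# Compose the prompt and model into a runnable chain, then invoke it
chain = prompt | llm
print(chain.invoke({"question": "Why does my for loop never stop?"}).content)
```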

Troubleshooting 

Jatin: “One challenge we faced involved tweaking the AI Learning Assistant’s responses to avoid providing direct answers. This proved to be quite difficult because GPT is built to be solution-driven — it tends to provide the best possible help and answers. However, our objective was to encourage learners to engage with the material actively rather than simply receive answers. 

The way we solved it was by providing the AI Learning Assistant with examples of how to handle specific chat interactions. For instance, if a learner asked a particular question, we instructed the AI Learning Assistant to first give a hint, and only give the correct answer if the learner was persistent. That was an aha moment for us.
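One way to express that hint-first behavior is to include a few example turns in the messages sent to the model. The sketch below is our own illustration of the idea; the wording, message structure, and helper function are assumptions, not the team’s actual prompts.

```python
# Hypothetical few-shot examples demonstrating the "hint first, answer only if the
# learner persists" pattern. The wording is illustrative.
HINT_FIRST_EXAMPLES = [
    {"role": "user", "content": "What's the answer to this checkpoint?"},
    {"role": "assistant", "content": "Hint: look at what your loop condition "
                                     "compares on each iteration. Which value changes?"},
    {"role": "user", "content": "I still can't get it. Please just show me."},
    {"role": "assistant", "content": "Okay. The condition should be `i < len(items)`; "
                                     "using `<=` reads one element past the end."},
]

def build_messages(system_prompt: str, history: list[dict], question: str) -> list[dict]:
    """Prepend the example turns so the model mimics the hint-first pattern."""
    return (
        [{"role": "system", "content": system_prompt}]
        + HINT_FIRST_EXAMPLES
        + history
        + [{"role": "user", "content": question}]
    )
```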

We wanted learners to be able to resolve doubts and questions about the course they’re currently taking, but we also wanted to encourage conversations around tangential topics, like subjects related to the course or what actions to take next. We also wanted the assistant to be aware of the latest technologies and developments in the industry and to be able to draw on the resources we’ve developed over the years, like articles and cheatsheets. That is why we decided to use RAG [retrieval augmented generation] with a vector database.  

Getting that right was kind of difficult, because we hadn’t used RAG or a vector database anywhere in our systems before, so we were starting from scratch. We went through an entire process of working with different databases, trying them out, seeing what worked best for our case, and then deciding what goes into the database.”  
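To make the retrieval step concrete, here is a minimal RAG sketch that indexes a couple of resources in a vector store and pulls the most relevant ones back at question time. Chroma is used purely as an example, since the article doesn’t name the database the team ultimately chose, and the documents and IDs are placeholders.

```python
# Minimal RAG sketch using Chroma as the vector store (an assumption; the article
# doesn't say which database was chosen). Documents and IDs are placeholders.
import chromadb

client = chromadb.Client()  # in-memory store, fine for a sketch
docs = client.create_collection("learning_resources")

# Index existing resources (articles, cheatsheets) so they can be retrieved later
docs.add(
    ids=["cheatsheet-loops", "article-list-comprehensions"],
    documents=[
        "Python for loops iterate over any iterable: for item in items: ...",
        "List comprehensions build a list in one expression: [x * 2 for x in nums]",
    ],
)

# At question time, fetch the most relevant snippets and fold them into the prompt
question = "How do I loop over a list in Python?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])
prompt = f"Use these resources to help the learner:\n{context}\n\nQuestion: {question}"
print(prompt)
```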

Ship 

Chirag Makkar, Senior Software Engineer at Codecademy: “For me personally, when I saw the assistant [working out] on its own how to lead the student to a particular answer instead of directly giving out solutions, it was the perfect aha moment. We were able to teach the assistant not only how to answer questions, but also when not to answer them.”  

Shivam Arora, Senior Software Engineer at Codecademy: “Seeing that the assistant has context was a very big aha moment for me. It knows which checkpoint you’re on and which lesson you’re in, and it doesn’t go out of that scope. So if you’re asking something while you’re in a Python course, it won’t give you an answer in HTML.”  

Retrospective  

Jatin: “After we presented a demo [to the Codecademy team], we had a lot of team members reach out to us and ask what kind of technologies we used and how exactly we did it. Another team happened to be doing their own research on AI assistants, and the tools they picked were similar to ours. It was very interesting to see that both engineering teams zeroed in on the same set of tools to solve the problem.”

Chirag: “These days, these [AI] tools are pretty common — even in VS Code, people use Codeium, GitHub Copilot, or other things. This is another realistic way of learning. It’s not really about how deep you can get into a particular technology or a language, it’s actually about how quickly you can build projects, learn things, and move ahead.”  

Snaps 

Chirag: “Major snaps to Jatin for leading the whole system through because he was actually the brains behind our own GPT and getting it right. Knowing how to inject data and context into our own version of GPT was really challenging for us to do as a team.”   

Conversation has been edited for clarity and length.
