Many companies use Oracle JD Edwards (JDE) World for enterprise operations, and supporting the application is challenging for a variety of reasons: end of standard support, outdated technology, and limited training and documentation resources. When key skilled professionals leave the organization, the loss of their expertise creates a knowledge gap, making it difficult to find adequate support. Imagine a feature that could instantly create comprehensive user manuals with a single click!
This paper explores how Generative AI can revolutionize the creation of techno-functional documentation by automatically analyzing the code base. With this approach, organizations can generate documentation directly from their code base without relying on scarce skilled technical resources.
Maintaining outdated systems presents numerous challenges. The scarcity of professionals skilled in these systems drives up support costs. Additional issues such as integration limitations, security and compliance risks, and limited scalability further complicate maintenance efforts.
A significant challenge is the shrinking pool of experts in legacy technologies. For instance, a survey revealed that 71% of IT leaders struggle to find developers proficient in legacy languages like COBOL. Older systems are also more prone to failures and downtime, with studies indicating they experience 60% more downtime compared to modern systems.
This whitepaper focuses on Oracle JD Edwards (JDE) World releases, which run on the IBM AS/400 platform and face similar support and maintenance challenges. However, leveraging Gen AI can greatly ease the modernization of JDE World applications.
The typical process involves using reverse engineering techniques to extract business logic from JDE code, as illustrated below.
Figure 1. Reverse engineering process to extract business logic from JDE code
Develop a process to extract JDE World code from program libraries into .txt files. Once these text files are uploaded to the AI processing area, a Gen AI LLM (Azure OpenAI in this document) parses the AS/400 code and generates business rules. Prompt engineering techniques are then used to create techno-functional documentation from the input code and other JDE reference documents and user guides.
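The chunk-and-prompt step described above can be sketched as follows. This is a minimal illustration, not the exact pipeline: the prompt wording, the 200-line chunk size, and the injectable llm_call function are assumptions made for the example.

```python
# Sketch: split an extracted JDE World source member (.txt) into line-based
# chunks and ask an LLM to document each one. The prompt text, chunk size,
# and `llm_call` are illustrative assumptions, not the paper's exact setup.
from typing import Callable, List

PROMPT_TEMPLATE = (
    "You are a JD Edwards World expert. Explain the business rules "
    "implemented by this AS/400 code fragment:\n\n{code}"
)

def chunk_source(source: str, max_lines: int = 200) -> List[str]:
    """Split a source member into chunks of at most max_lines lines."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def document_chunks(source: str, llm_call: Callable[[str], str],
                    max_lines: int = 200) -> List[str]:
    """Run each chunk through the model; returns one narrative per chunk."""
    return [llm_call(PROMPT_TEMPLATE.format(code=chunk))
            for chunk in chunk_source(source, max_lines)]
```

With LangChain and Azure OpenAI, llm_call could wrap a chat model, e.g. `llm_call = lambda p: llm.invoke(p).content` where `llm` is an `AzureChatOpenAI` instance (deployment name and configuration are environment-specific).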
Retrieval-Augmented Generation (RAG) is a powerful architecture that combines the capabilities of a Large Language Model (LLM) with an information retrieval system to enhance response generation.
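The RAG idea can be illustrated with a deliberately simple sketch: retrieve the JDE reference snippets most relevant to a code fragment, then prepend them to the generation prompt. The keyword-overlap scoring below is a stand-in assumption; a production setup would use embeddings and a vector store (for example, via LangChain).

```python
# Minimal RAG sketch: rank reference documents by word overlap with the
# query, then augment the LLM prompt with the top matches. Keyword overlap
# is a placeholder for real embedding-based retrieval.
from typing import List

def retrieve(query: str, documents: List[str], k: int = 2) -> List[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_rag_prompt(code: str, references: List[str]) -> str:
    """Prepend retrieved reference material to the documentation prompt."""
    context = "\n---\n".join(retrieve(code, references))
    return (f"Reference material:\n{context}\n\n"
            f"Using the references, document this JDE code:\n{code}")
```

The design point is that the model answers from retrieved, authoritative context (JDE reference documents and user guides) rather than from its parametric memory alone.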
LangChain is a framework designed to help developers build applications using large language models (LLMs) like those provided by Azure OpenAI. It simplifies the integration of these models into various applications by offering tools and components that streamline the process.
When using LangChain with Azure OpenAI, you can leverage the same API calls as you would with OpenAI's API. This compatibility makes it easier to switch between or use both platforms.
API Configuration: You can configure the OpenAI Python package to use Azure OpenAI by setting environment variables for the API version, endpoint, and API key.
Authentication: LangChain supports both API Key and Azure Active Directory (AAD) authentication methods.
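The API-key configuration described above can be sketched as follows. The endpoint, key, deployment name, and API version are placeholders to be replaced with your own Azure OpenAI resource values.

```python
# Sketch: configure Azure OpenAI via environment variables, as LangChain's
# Azure wrapper expects. All values below are placeholders.
import os

os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_VERSION"] = "2024-02-01"  # example GA api-version

# With these set, the model can be constructed without explicit credentials:
#   from langchain_openai import AzureChatOpenAI
#   llm = AzureChatOpenAI(azure_deployment="<your-deployment>")
#   print(llm.invoke("Summarize this subroutine: ...").content)
```

For AAD authentication, the API key is replaced by a token credential; see the LangChain and Azure OpenAI documentation for the exact parameters.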
The AI-generated output is fragmented, with one fragment per chunk of code processed. These fragments are then consolidated by a Python script that uses the python-docx library to create a Microsoft Word document with headings and titled sections.
AI-generated documentation will not be entirely complete. It will need review, validation, and refinement to achieve the desired accuracy and detail. Once subroutines, program coding formats, and JDE Data dictionaries are explained using prompt engineering, the resulting AI-generated document will be valuable for further study, review, and understanding.