
Executive Summary

Imagine an EDA tool that can respond to prompts like “design a battery bank with physical dimensions of 5in x 1in x 3in that can store up to 12,000 mAh, with USB charging, for less than $20 USD of BOM”, and then output several design options as working schematic/layout files ranked according to selectable factors such as power consumption, cost, dimensions, and suppliers.

Far-fetched?  With modern Deep Learning (DL) and Large Language Models (LLMs), this is not a pipe dream but within reach.  Consider how ChatGPT is able to transform textual prompts into meaningful responses because its LLM was trained on vast amounts of text from online books, articles, and news.  There is no reason an LLM cannot likewise be trained on vast amounts of schematic files, PCB files, and 3D model files and then generate working schematics and PCB designs when given textual prompts.  After all, schematics, PCB designs, component files, and the relationships among them are usually encoded in annotated XML files, which is simply a different form of language.

Now, only a handful of companies possess enough design data to train a robust LLM, not to mention a community that will continue to produce new data, and Renesas owns one of them in Altium.  Renesas should therefore capitalize on this golden opportunity to make this tool a reality.


The sections that follow offer information on Altium design files, generated for the most part by ChatGPT with the prompt: "How is an Altium schematic file structured?"  It serves as a reminder of the power of an LLM to generate meaningful answers when given only a simple prompt.


1.0  Altium schematic files are expressed as a language in XML

Altium schematics are captured as files with the .SchDoc extension, structured to represent an electronic circuit containing various elements such as components, nets, symbols, and annotations.  Below is an example illustrating the schematic file structure; as generated by ChatGPT, it is a simplified XML illustration rather than the exact on-disk format:

<AltiumSchematicDocument>
  <Header>
    <Version>1.0</Version>
    <Author>John Doe</Author>
    <Title>My Schematic</Title>
  </Header>
  <Components>
    <Component>
      <Designator>R1</Designator>
      <Type>Resistor</Type>
      <Value>10k</Value>
      <Pins>
        <Pin Number="1" Connection="NET1"/>
        <Pin Number="2" Connection="NET2"/>
      </Pins>
    </Component>
    <!-- More components -->
  </Components>
  <Nets>
    <Net Name="NET1">
      <Connection Pin="R1.1" />
      <Connection Pin="U1.2" />
    </Net>
    <!-- More nets -->
  </Nets>
  <Connections>
    <Wire Start="R1.1" End="U1.2"/>
    <!-- More connections -->
  </Connections>
</AltiumSchematicDocument>
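
As a sketch of how this representation can be consumed programmatically, the short Python snippet below parses the simplified XML shown above and lists its components and nets.  The file name, like the XML structure itself, is an illustrative assumption, not the actual .SchDoc format.

import xml.etree.ElementTree as ET

# Parse the simplified schematic XML shown above (hypothetical file name).
root = ET.parse("MySchematic.xml").getroot()

# List every component with its designator, type, and value.
for comp in root.findall("./Components/Component"):
    print(comp.findtext("Designator"), comp.findtext("Type"), comp.findtext("Value"))

# List every net and the pins it ties together.
for net in root.findall("./Nets/Net"):
    pins = [conn.get("Pin") for conn in net.findall("Connection")]
    print(net.get("Name"), "connects", ", ".join(pins))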


2.0 Altium component libraries are expressed in XML

An Altium schematic is really a graph of relationships connecting components, each of which is itself described in an XML file.  Here is an example:

Component File Format
<AltiumComponentLibrary>
  <Library>
    <Component>
      <Name>R1</Name>
      <Type>Resistor</Type>
      <Attributes>
        <Value>10k</Value>
        <Footprint>Resistor_SMD_0805</Footprint>
      </Attributes>
      <Pins>
        <Pin Number="1" Type="Electrical" Direction="Input" />
        <Pin Number="2" Type="Electrical" Direction="Output" />
      </Pins>
      <Symbol>
        <Shape Type="Rectangle" Position="(0, 0)" Width="10" Height="10"/>
        <!-- More graphical elements defining the symbol -->
      </Symbol>
    </Component>
    <!-- More components -->
  </Library>
</AltiumComponentLibrary>
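
To make the graph interpretation concrete, the following sketch joins the simplified schematic and library XML from sections 1.0 and 2.0 into a basic connectivity graph keyed by net.  The file names are hypothetical placeholders, and the structure assumed is the illustrative one above.

import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical file names holding the simplified XML from sections 1.0 and 2.0.
sch_root = ET.parse("MySchematic.xml").getroot()
lib_root = ET.parse("MyLibrary.xml").getroot()

# Index library parts by name so schematic components can be enriched with footprints.
footprints = {
    comp.findtext("Name"): comp.findtext("Attributes/Footprint")
    for comp in lib_root.findall("./Library/Component")
}

# Build the net-to-pins graph that connects components to one another.
graph = defaultdict(list)
for net in sch_root.findall("./Nets/Net"):
    for conn in net.findall("Connection"):
        graph[net.get("Name")].append(conn.get("Pin"))

for name, pins in graph.items():
    parts = sorted({pin.split(".")[0] for pin in pins})
    print(name, "->", pins, "| footprints:", [footprints.get(p) for p in parts])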


3.0  Large Language Model (LLM)

A Large Language Model (LLM) is a type of artificial intelligence (AI) model designed to process and generate human-like text based on vast amounts of data. LLMs are a subclass of natural language processing (NLP) models, which aim to understand, interpret, and generate human language. These models are "large" because they are trained on enormous datasets, often containing billions or even trillions of words, and they have millions to billions of parameters (the internal variables the model uses to make predictions).

LLMs are trained on diverse and extensive datasets, which can include books, websites, articles, and other text-based content. The more data the model is trained on, the better it can understand language, context, and various topics.  An LLM is typically based on deep learning architectures, particularly transformers, which have proven to be very effective for NLP tasks. The transformer architecture is designed to handle long-range dependencies in text, meaning it can understand context and lexical relationships across large spans of text.  A trained LLM can be used to answer questions when given a prompt.  The prompt serves as a cue to recall many word associations at different ranges, and this recall can form sentences that actually make sense as answers to the given prompt.

Now, imagine that instead of training a deep learning model on books, websites, and articles, we train it on the mountains of schematic files, component library files, and 3D model files that Altium has access to, so that the LLM learns the relationships among components and subsystems as well as the syntax that encodes them.
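
As a rough sketch of what preparing such a training corpus could look like, the snippet below treats schematic XML as plain text and slices it into fixed-length token blocks for causal language modeling.  The GPT-2 tokenizer, folder name, and block size are illustrative assumptions only; a production effort would likely train a tokenizer on the design-file vocabulary itself.

from pathlib import Path
from transformers import AutoTokenizer

# Illustrative stand-in tokenizer; a real effort would likely train one on design-file text.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def build_training_blocks(xml_dir, block_size=512):
    """Turn exported schematic XML files into fixed-length token blocks for a causal LM."""
    blocks = []
    for path in Path(xml_dir).glob("*.xml"):
        tokens = tokenizer(path.read_text())["input_ids"]
        # Split the token stream into contiguous blocks the model can ingest during training.
        for i in range(0, len(tokens) - block_size + 1, block_size):
            blocks.append(tokens[i:i + block_size])
    return blocks

blocks = build_training_blocks("schematics/")   # hypothetical folder of exported XML
print(len(blocks), "training blocks of 512 tokens each")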

How LLMs Work:

  • Training: During training, the model is shown large amounts of text and learns to predict the next word or token in a sequence. For example, given the sentence "The cat sat on the ___," the model learns to predict that the missing word is likely "mat." Over time, it learns complex patterns, such as grammar, vocabulary, and even world knowledge.
  • Inference/Generation: Once trained, LLMs can be used to generate new text or complete tasks based on input prompts. Given a user’s query, the model uses its learned knowledge to generate a relevant and coherent response.
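
Both steps can be demonstrated in a few lines with an off-the-shelf model.  GPT-2 is used here purely as a placeholder that knows nothing about schematics; the point is to illustrate the predict-the-next-token objective and prompt-driven generation described above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model purely to illustrate the mechanics, not schematic generation.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Training objective: predict the next token, e.g. "mat" after "The cat sat on the".
inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Most likely next token:", tokenizer.decode(logits[0, -1].argmax().item()))

# Inference/generation: extend the prompt token by token into a coherent continuation.
output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output[0]))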

Applications of Large Language Models:

  • Chatbots and Virtual Assistants: LLMs power conversational AI systems, like Siri, Alexa, or custom customer service bots.
  • Content Creation: They can assist in writing articles, generating creative text, or even code generation.
  • Translation and Localization: LLMs can translate text between different languages or adjust content to fit cultural contexts.
  • Text-Based Search Engines: LLMs can improve search results by better understanding the intent behind user queries.
  • Healthcare: LLMs can assist in medical diagnosis, summarizing patient histories, or answering healthcare-related questions.
  • Legal and Financial Analysis: LLMs can process legal documents or financial reports, helping with tasks like contract review or summarization.
  • Schematic and Layout Generation: With the approach proposed here, Renesas can add automatic schematic and layout file generation to this list of LLM applications.


4.0 Recommended Next Step

Realizing the proposed vision requires a strong understanding of the theory and application of deep learning and LLMs.  It is therefore recommended that Renesas first file a provisional patent and then pursue the development of a prototype with a reputable research university through grants or an R&D contract.  Ideally, Renesas should own the IP coming out of the R&D.  If the prototype is promising, the capability can then be incorporated into Altium products.

