
Executive Summary

Imagine a computer that can respond to a prompt like “design an energy storage device of roughly 3"x3"x3" that can store up to 12,000 mAh, is charged by a USB-C PD charger, and costs less than $50 USD in BOM,” and then output several sets of working schematic/layout files ranked by a host of filterable factors such as power consumption, cost, dimensions, and component suppliers.

Far fetched?  With modern Deep Learning (DL) and Large Language Models (LLMs), this is not a pipe dream but a reality.  Consider how ChatGPT is able to transform textual prompts into meaningful textual responses by training on vast amounts of text from books, articles, and news.  There is no reason why an LLM cannot be trained on a vast amount of schematic files, PCB files, and 3D model files, and then generate working schematic and PCB files when given textual prompts.  After all, schematic files, PCB files, component files, and the relationships among them are all encoded in structured, annotated XML files: a different form of language.

Today, only a handful of companies possess enough design data to train a robust LLM, not to mention the community and reach needed for sustained data collection, and in Altium, Renesas has one of those leaders.  Renesas should therefore seize this golden opportunity to make this vision a reality.


Below is additional background information on the Altium design file format, generated for the most part by ChatGPT.  It serves as a reminder of the power of an LLM to generate meaningful answers from a prompt.


1.0  Altium schematic files are expressed as a language in XML

Altium schematics are captured as XML files with the .SchDoc extension.  The file is structured to represent an electronic circuit containing various elements, such as components, nets, symbols, and annotations.  Below is an example illustrating the Altium schematic file structure:

<AltiumSchematicDocument>
  <Header>
    <Version>1.0</Version>
    <Author>John Doe</Author>
    <Title>My Schematic</Title>
  </Header>
  <Components>
    <Component>
      <Designator>R1</Designator>
      <Type>Resistor</Type>
      <Value>10k</Value>
      <Pins>
        <Pin Number="1" Connection="NET1"/>
        <Pin Number="2" Connection="NET2"/>
      </Pins>
    </Component>
    <!-- More components -->
  </Components>
  <Nets>
    <Net Name="NET1">
      <Connection Pin="R1.1" />
      <Connection Pin="U1.2" />
    </Net>
    <!-- More nets -->
  </Nets>
  <Connections>
    <Wire Start="R1.1" End="U1.2"/>
    <!-- More connections -->
  </Connections>
</AltiumSchematicDocument>
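
Because the schematic is just structured text, even a short script can recover the circuit topology from it.  Below is a minimal sketch, assuming the illustrative schema above (not Altium's actual on-disk .SchDoc format), that extracts the component list and net connectivity with Python's standard xml.etree.ElementTree module:

# Minimal sketch: parse the illustrative schematic XML into a netlist.
# The element names below follow the example schema on this page,
# not Altium's real on-disk .SchDoc format.
import xml.etree.ElementTree as ET

def parse_schematic(xml_text):
    root = ET.fromstring(xml_text)

    # Components keyed by designator, e.g. "R1" -> {"Type": "Resistor", "Value": "10k"}
    components = {}
    for comp in root.findall("./Components/Component"):
        components[comp.findtext("Designator")] = {
            "Type": comp.findtext("Type"),
            "Value": comp.findtext("Value"),
        }

    # Nets keyed by name, e.g. "NET1" -> ["R1.1", "U1.2"]
    nets = {}
    for net in root.findall("./Nets/Net"):
        nets[net.get("Name")] = [c.get("Pin") for c in net.findall("Connection")]

    return components, nets

# Abridged copy of the example above, used as test input.
example = """<AltiumSchematicDocument>
  <Components>
    <Component><Designator>R1</Designator><Type>Resistor</Type><Value>10k</Value>
      <Pins><Pin Number="1" Connection="NET1"/><Pin Number="2" Connection="NET2"/></Pins>
    </Component>
  </Components>
  <Nets>
    <Net Name="NET1"><Connection Pin="R1.1"/><Connection Pin="U1.2"/></Net>
  </Nets>
</AltiumSchematicDocument>"""

components, nets = parse_schematic(example)
print(components)  # {'R1': {'Type': 'Resistor', 'Value': '10k'}}
print(nets)        # {'NET1': ['R1.1', 'U1.2']}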


2.0 Altium component libraries are expressed in XML

An Altium schematic is really a graph of relationships connecting components, each of which is itself described in an XML file.  Here is an example:

Component File Format
<AltiumComponentLibrary>
  <Library>
    <Component>
      <Name>R1</Name>
      <Type>Resistor</Type>
      <Attributes>
        <Value>10k</Value>
        <Footprint>Resistor_SMD_0805</Footprint>
      </Attributes>
      <Pins>
        <Pin Number="1" Type="Electrical" Direction="Input" />
        <Pin Number="2" Type="Electrical" Direction="Output" />
      </Pins>
      <Symbol>
        <Shape Type="Rectangle" Position="(0, 0)" Width="10" Height="10"/>
        <!-- More graphical elements defining the symbol -->
      </Symbol>
    </Component>
    <!-- More components -->
  </Library>
</AltiumComponentLibrary>
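
Putting the two file types together makes the graph nature explicit: component pins are joined by nets, so two components are adjacent whenever they share a net.  The sketch below is a minimal illustration in that spirit, reusing the nets dictionary produced by parse_schematic() in the previous sketch; it is not how Altium represents connectivity internally.

# Minimal sketch: turn net connectivity into a component-level graph.
# Expects nets in the form produced by parse_schematic() above,
# e.g. {"NET1": ["R1.1", "U1.2"]}.
from collections import defaultdict
from itertools import combinations

def build_component_graph(nets):
    graph = defaultdict(set)
    for pins in nets.values():
        # "R1.1" -> component designator "R1"
        designators = sorted({pin.split(".")[0] for pin in pins})
        for a, b in combinations(designators, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

nets = {"NET1": ["R1.1", "U1.2"], "NET2": ["R1.2", "C1.1"]}
print(dict(build_component_graph(nets)))
# e.g. {'R1': {'U1', 'C1'}, 'U1': {'R1'}, 'C1': {'R1'}} (set order may vary)

A model trained on many such graphs, together with the library attributes above, would see both the circuit topology and the part metadata that describes each node.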


3.0  Large Language Model (LLM)

A Large Language Model (LLM) is a type of artificial intelligence (AI) model designed to process and generate human-like text based on vast amounts of data. LLMs are a subclass of natural language processing (NLP) models, which aim to understand, interpret, and generate human language. These models are "large" because they are trained on enormous datasets, often containing billions or even trillions of words, and they have millions to billions of parameters (the internal variables the model uses to make predictions).

LLMs are trained on diverse and extensive datasets, which can include books, websites, articles, and other text-based content. The more data the model is trained on, the better it can understand language, context, and various topics.  An LLM is typically based on deep learning architectures, particularly transformers, which have proven to be very effective for NLP tasks. The transformer architecture is designed to handle long-range interdependencies in text, meaning it can understand context and lexical relationships across large spans of text.  A trained LLM can be used to answer questions when given a prompt.  The prompt serves as a cue that recalls word associations at many different ranges, and this recall can form sentences that actually make sense as answers to the given prompt.

Now, imagine that instead of training a deep learning model on books, websites, and articles, we train it on the mountains of schematic files, component library files, and 3D model files that Altium has access to, so that the LLM learns the relationships among components and subsystems as well as the syntax that encodes them.

How LLMs Work:

  • Training: During training, the model is shown large amounts of text and learns to predict the next word or token in a sequence. For example, given the sentence "The cat sat on the ___," the model learns to predict that the missing word is likely "mat." Over time, it learns complex patterns, such as grammar, vocabulary, and even world knowledge (see the toy sketch after this list).
  • Inference/Generation: Once trained, LLMs can be used to generate new text or complete tasks based on input prompts. Given a user’s query, the model uses its learned knowledge to generate a relevant and coherent response.
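
The toy sketch below illustrates the training objective in miniature: it counts token bigrams in a snippet of schematic-style XML and then "predicts" the most likely next token.  A real LLM replaces the counting with a transformer holding billions of parameters trained on far more data, but the objective is the same (predict the next token given the tokens so far); nothing here reflects how any production model is actually implemented.

# Toy sketch of next-token prediction: a bigram counter standing in for a
# transformer. Real LLM training optimizes the same kind of objective
# (predict the next token) but with a neural network over huge corpora.
import re
from collections import Counter, defaultdict

corpus = """
<Component><Designator>R1</Designator><Type>Resistor</Type><Value>10k</Value></Component>
<Component><Designator>C1</Designator><Type>Capacitor</Type><Value>100n</Value></Component>
"""

# Crude tokenizer: split the XML into tags and text tokens.
tokens = re.findall(r"</?\w+>|[\w.]+", corpus)

# "Training": count which token follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

# "Inference": given a prompt token, predict the most likely next token.
prompt = "<Type>"
prediction = bigrams[prompt].most_common(1)[0][0]
print(f"After {prompt!r} the model predicts {prediction!r}")  # e.g. 'Resistor'

Swap the snippet for a corpus of real schematic, component library, and PCB files and the same objective, scaled up, is what would teach a model the syntax and component relationships described in the previous sections.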

Applications of Large Language Models:

  • Chatbots and Virtual Assistants: LLMs power conversational AI systems, like Siri, Alexa, or custom customer service bots.
  • Content Creation: They can assist in writing articles, generating creative text, or even code generation.
  • Translation and Localization: LLMs can translate text between different languages or adjust content to fit cultural contexts.
  • Text-Based Search Engines: LLMs can improve search results by better understanding the intent behind user queries.
  • Healthcare: LLMs can assist in medical diagnosis, summarizing patient histories, or answering healthcare-related questions.
  • Legal and Financial Analysis: LLMs can process legal documents or financial reports, helping with tasks like contract review or summarization.
  • Schematic and Layout Generation: With this proposal, Renesas can add automatic schematic and layout file generation to the list of successful LLM applications.


4.0 Recommended Next Step

Realizing the proposed vision requires a strong understanding of the theory and application of deep learning and LLMs.  It is therefore recommended that Renesas first file a provisional patent and then pursue the development of a prototype with a reputable research university through grants or an R&D contract.  Ideally, Renesas should own the IP resulting from the R&D.  If the prototype is promising, incorporate the capability into Altium products.

