Uniform Driver Interface

[[Category:Uniform Driver Interface]]
[[Category:Drivers]]
[[Category:Driver Interfaces]]

[[Image:Udi_color_330x220.jpg‎|thumb|alt=Logo|The official Project UDI logo]]


'''The UDI revival effort maintains an IRC channel on Freenode (irc.freenode.net), called #udi'''. Feel free to join and ask questions.

UDI stands for "Uniform Driver Interface". It is the specification of a framework and driver API / ABI that enables different operating systems (implementing the UDI framework) to use the same drivers. Conceived by several large industry corporations, it has fallen dormant, despite being functional and delivering on its promise.


UDI drivers are binary compatible across all UDI-compliant operating systems running on the same CPU family. They are also source compatible across all UDI-implementing operating systems. This means a driver only has to be developed once.


While Microsoft Windows gets all the hardware drivers it wants, and GNU discourages UDI for [http://www.gnu.org/philosophy/udi.html philosophical reasons], its advantages for hobbyist OS developers are obvious.
==Why UDI?==


The Uniform Driver Interface would, should it be widely adopted, provide a common driver framework across kernels and platforms, enabling drivers to be written independently of the target kernel and, to a large extent, of the target hardware platform. UDI has several projected advantages over existing driver interfaces which may motivate the reader to adopt it:


===Advantages===
* Portability (both cross-OS and cross-platform), which was mentioned in the above section, is perhaps the primary concern for which UDI was developed in the first place. All we can hope for is that enough operating systems will embrace the model so we can actually take advantage of it.
* Performance is comparable to or better than that of custom, native-API drivers when the UDI environment is implemented natively. For environments where performance is critical, UDI does not inhibit service quality. UDI is explicitly designed to be non-blocking and lockless, featuring a synchronization model for increased MP scalability without locking and many other high-scalability focused features.
* UDI can integrate seamlessly into existing kernel environments regardless of the OS architecture ([[Monolithic Kernel|monolithic kernel]] vs. [[microkernel]], POSIX vs. non-POSIX, etc.) with little or no extra performance overhead.
* Reliability and stability have been explicitly provided for by the design. UDI tries to eliminate some categories of potential bugs, such as (but not limited to) resource leaks and deadlocks (all interfaces can potentially be implemented without any locking at all).
* Flexibility is another thing UDI has been designed with in mind: not only in the way the specification was conceived (i.e., to be extensible), but also in the sense that it permits system programmers to apply techniques such as driver isolation, shadow drivers, etc. if they see fit to do so.
* The interface is fully asynchronous, in every respect; high scaling systems are becoming increasingly predominant and asynchronicity is slowly becoming an "expected" feature for modern kernels. UDI moves ahead of the herd to enable a compliant kernel to slowly adopt asynchronous interfaces without having to do major redesign later on.


===Disadvantages===
* Moderately complex, and it will generally take a while to understand the specification.
* It cannot simply be ported to immature kernels. A kernel must have a certain minimum level of maturity to reasonably attempt to become UDI compliant.
* Not at all viable for "casual" projects: requires a significant amount of foreknowledge and prior work.

==Core components of UDI Drivers==


[[Image:Core_spec-8.gif‎|left|frame|alt=Environment|High level view of UDI environments]]


An implementation of the Uniform Driver Interface specification is known as a UDI Environment. There is a reference implementation available (see link below) which provides usable code for several existing kernels (Linux, BSD, Solaris), and it can be used as the basis for a fresh implementation. Kernel environment implementations are responsible for providing the Service Call interfaces specified by the UDI standard; a kernel may choose to implement these as native system calls, or via library extensions -- the decision is up to the implementer. There are two types of service calls recognized by the UDI paradigm: synchronous (which return their result immediately to the caller - i.e., to the driver) and asynchronous (which work through a callback mechanism).
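
For instance, a driver that needs memory issues an asynchronous service call and continues in a callback. The sketch below is purely illustrative: the udi_mem_alloc and udi_strlen utility calls come from the core specification, but their exact signatures and flag values should be verified against it, and the pseudo_* names are invented.

<syntaxhighlight lang="c">
#include <udi.h>

static void pseudo_got_buffer(udi_cb_t *gcb, void *new_mem);

static void pseudo_example(udi_cb_t *gcb, const char *name)
{
	/* Synchronous service call: the result is returned immediately. */
	udi_size_t len = udi_strlen(name);

	/* Asynchronous service call: returns right away; the result is
	 * delivered later by invoking the callback with our control block. */
	udi_mem_alloc(pseudo_got_buffer, gcb, len + 1, 0 /* default flags */);
}

static void pseudo_got_buffer(udi_cb_t *gcb, void *new_mem)
{
	/* new_mem now belongs to this region; continue the operation here. */
}
</syntaxhighlight>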


UDI drivers also actively take part in identifying their child devices and helping to build the host kernel's device tree: the I/O topology is a tree, with a central node (say, the system board) whose children are buses, each of which may have several controllers attached, and so on. Each node enumerates its children, which in turn enumerate theirs. UDI drivers for these devices interact in a tree-like fashion, just as the hardware does. Let's take a closer look at the drivers themselves!


Drivers are split into one or more modules, and each module has at least one region. A driver that has been instantiated (executed, so to speak) uses IPC calls ("channel operations") to communicate between modules and regions. If a driver is used to instantiate more than one device (say, a disk driver driving two separate disks), whether the actual driver code is shared copy-on-write, duplicated in memory, and so on, is up to the environment.


===Modules===


A module is essentially a single executable code object; that is, a driver can be broken into multiple executables. A large driver that does not need all of its code in memory all the time may be implemented as a multi-module driver. This partitioning of the driver code into modules is up to the driver vendor, of course. Most UDI drivers are expected to be single-module drivers, but complex drivers such as graphics card drivers may be best implemented as multi-module drivers. For example, if a graphics driver exports an OpenGL 3D API along with a Direct3D API, it is very likely that both front-ends have a lot of code behind them that would occupy a lot of memory should both be loaded. Most kernels will use ''either'' OpenGL ''or'' Direct3D, so if such a graphics driver were to split its OpenGL and Direct3D implementations into separate modules, a kernel loading that driver could avoid allocating memory for the code and data of the API it isn't using.
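
In udiprops.txt terms (see the sample later in this article), such a driver would simply declare more than one module in its build section. The fragment below is a rough, hypothetical illustration with invented module and file names; check the specification's packaging chapter for the exact statement semantics.

<syntaxhighlight lang="text">
module gfx_core
source_files gfx_core.c gfx_hw.c

module gfx_gl
source_files gfx_gl.c

module gfx_d3d
source_files gfx_d3d.c
</syntaxhighlight>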


===Regions===
Main article: [[ User:Gravaera/UDI_Regions | UDI Regions ]]


Regions are nothing more than blocks of related data. For example, a network card may have a set of register states that are specific to its send() function, and a different set of state and variables specific to its receive() function. Data is explicitly separated into functionality regions in UDI: a region is simply driver-allocated data for its state variables. The most intuitive way to split driver data is into functional sub-components of the device in question. So a network card driver may choose to have a send region and a receive region, while a graphics driver writer may choose to partition the driver into a framebuffer-writing region, a transformation region, and so on. IPC request messages can then be sent over UDI IPC channels to each region based on the purpose of that region.
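
As a concrete sketch (hypothetical driver; all names are invented for illustration), the send and receive regions of a network driver would each keep their own, completely separate region data:

<syntaxhighlight lang="c">
/* Region-local data of the hypothetical driver's send region. */
typedef struct {
	udi_channel_t tx_channel;      /* channel over which send requests arrive */
	udi_ubit32_t  packets_sent;
	udi_ubit32_t  tx_ring_head;
} pnet_tx_region_data_t;

/* Region-local data of the receive region; never shared with the send region. */
typedef struct {
	udi_channel_t rx_channel;
	udi_ubit32_t  packets_received;
	udi_ubit32_t  rx_ring_tail;
} pnet_rx_region_data_t;
</syntaxhighlight>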


Regions also form the unit of concurrent execution in UDI. Since Regions are nothing more than data, they are also the units which must be synchronized against concurrent writes. Generally, this means making sure that no two threads can modify region data at once. The design of the UDI interfaces is perfectly capable of working without the use of locking, and it is left up to the host OS Environment to choose whether it will use lockless algorithms, spinlocks, waitqueues, or some other method to ensure that no two threads modify region data at the same time. See the main article for a detailed explanation of several practically usable UDI synchronization models.
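
How an environment enforces this one-thread-per-region rule is entirely up to the implementer. The deliberately naive sketch below is '''not''' part of the UDI API - every name in it is invented, and pthread primitives are used only to keep the example self-contained (a kernel would use its own primitives or a lockless scheme):

<syntaxhighlight lang="c">
#include <pthread.h>

/* Hypothetical environment-internal bookkeeping, one object per region. */
struct queued_op { struct queued_op *next; void (*run)(void *region_data); };

struct env_region {
	pthread_mutex_t   lock;        /* could equally be a lockless scheme             */
	int               busy;        /* is a thread currently executing in the region? */
	struct queued_op *deferred;    /* ops that arrived while busy (LIFO for brevity) */
	void             *region_data; /* the driver's region-local data                 */
};

/* Deliver a channel operation: run it now if the region is idle,
 * otherwise defer it so only one thread ever touches region_data. */
static void env_deliver_op(struct env_region *r, struct queued_op *op)
{
	pthread_mutex_lock(&r->lock);
	if (r->busy) {                     /* some other thread is inside the region */
		op->next = r->deferred;
		r->deferred = op;
		pthread_mutex_unlock(&r->lock);
		return;
	}
	r->busy = 1;
	pthread_mutex_unlock(&r->lock);

	op->run(r->region_data);           /* the driver's entry point runs here */

	/* Drain anything deferred while we were busy, then mark the region idle. */
	pthread_mutex_lock(&r->lock);
	while (r->deferred) {
		struct queued_op *next_op = r->deferred;
		r->deferred = next_op->next;
		pthread_mutex_unlock(&r->lock);
		next_op->run(r->region_data);
		pthread_mutex_lock(&r->lock);
	}
	r->busy = 0;
	pthread_mutex_unlock(&r->lock);
}
</syntaxhighlight>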


Another attribute of UDI regions is that they are location- and instance-independent, meaning that they can be moved from one place to another without affecting any of the other regions because they share no common state. That is, a driver can be marshaled and moved from one NUMA node to another, or one physical machine to another over a network, or any other similar type of migration. This is particularly interesting in multiprocessor systems (esp. NUMA), and high-scaling compute clusters because an environment may separate regions due to performance and resource constraints. It's worth mentioning that, because of the separate states, the tasks performed by regions are mutually-exclusive (for instance a network driver might have one region that handles sending packets and another receiving). This is a potential area where host OSs can make huge optimizations to remove performance bottlenecks.
<source lang="c">void meta_op(meta_cbtype_cb_t *cb, /*, other arguments */);</source>


===Channels===
Where "meta" is a prefix specific to the metalanguage (e.g., udi or usbdi), "op" is the channel operation and "cbtype" is the control block type (read more on this below). Channel operations take zero or more parameters, depending on which operation we're talking about. The target channel is specified by the value of cb->gcb.channel.
Main article: [[ User:Gravaera/UDI_Channels | UDI Channels ]]


The only way for regions to communicate is through channels. Channels are an IPC-agnostic abstraction of a bi-directional communication mechanism. Each of the two channel endpoints provides an ops vector, which is a set of entry points. Channels are referenced via handles of type udi_channel_t (see the definition of handles below). The channel operations, along with the associated functionality, are defined by metalanguages. Metalanguages are separately defined for each class of drivers, but we'll get to that soon.
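
By convention, all channel operation invocations have the following form:

<syntaxhighlight lang="c">void meta_op(meta_cbtype_cb_t *cb /*, other arguments */);</syntaxhighlight>

where "meta" is a prefix specific to the metalanguage (e.g., udi or usbdi), "op" is the channel operation and "cbtype" is the control block type (see the section on control blocks below). Channel operations take zero or more additional arguments, depending on the operation, and the target channel is given by the value of cb->gcb.channel. Invoking the operation results in the environment running the corresponding entry point in the region at the other end of the channel, which by convention is declared as:

<syntaxhighlight lang="c">static void ddd_meta_op(meta_cbtype_cb_t *cb /*, other arguments */);</syntaxhighlight>

with "ddd" being a driver-specific prefix.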


===Metalanguages===
<source lang="c">static void ddd_meta_op(meta_cbtype_cb_t *cb /*, other arguments */);</source>
Main article: [[ User:Gravaera/UDI_Metalanguages | Metalanguages ]]


Metalanguages define extensions to the core specification for various purposes, and can also be used to define custom IPC protocol APIs between modules/regions. A custom protocol API may be needed where, for example, a network card driver has a "'''Control'''" region which takes commands from the kernel for power management ("go to sleep", "prepare to shut down", etc.), plus a '''Send''' region and a '''Receive''' region which handle its send() and receive() functions respectively.


It follows naturally that if the driver receives a "go to sleep" command from the kernel on its Control region, it needs to send messages to its Send and Receive regions to make them cease operation. There is no generic IPC_Send() function defined for IPC across UDI channels -- all IPC must be done according to the protocol APIs defined by a Metalanguage, whether standardized by the UDI specification or custom-defined. Thankfully, driver writers do not need to define custom protocols for every case where they simply want to send custom messages between regions: the UDI Core specification defines a "'''Generic I/O Metalanguage'''" IPC protocol API which covers a wide range of generic IPC needs and can be extended with custom messages as desired.
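
As a rough sketch of what such a custom command might look like: the udi_gio_xfer_req operation and udi_gio_xfer_cb_t control block belong to the Generic I/O Metalanguage in the core specification, but treat the exact field and constant names here as approximations to be checked against it, and the quiesce op code and pnet_* names as driver-specific inventions.

<syntaxhighlight lang="c">
#include <udi.h>

/* Driver-defined command code; the GIO metalanguage reserves a range of
 * custom op codes (assumed here to start at UDI_GIO_OP_CUSTOM). */
#define PNET_GIO_OP_QUIESCE   UDI_GIO_OP_CUSTOM

/* Ask the region at the other end of cb's channel to stop its activity.
 * cb is a previously allocated GIO transfer control block whose channel
 * leads from the Control region to the Send (or Receive) region. */
static void pnet_ctrl_send_quiesce(udi_gio_xfer_cb_t *cb)
{
	cb->op = PNET_GIO_OP_QUIESCE;
	cb->data_buf = NULL;              /* no payload for this command              */
	udi_gio_xfer_req(cb);             /* reply comes back as xfer_ack or xfer_nak */
}
</syntaxhighlight>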


Apart from IPC protocol APIs, Metalanguages also cover extensions to the core specification. For example, PCI bus drivers, ISA bus drivers and so on do not each need a new Metalanguage: UDI defines a core Bus/Bridge Metalanguage, and that Metalanguage is extended with bus-specific Bus/Bridge extensions as new buses need to be supported. This is a case where a Metalanguage is already defined by the UDI standard and is itself extended as needed for each bus.


Entirely new Metalanguages can also be created where necessary. For example, an SCSI Host Bus Adapter is not really a bus: it is an I/O microcontroller device that acts as a parent device to SCSI devices (mostly disks). It looks like a bus, but it is better handled with an IPC protocol and API of its own, so the UDI specification defines an SCSI Host Bus Adapter Metalanguage API which manages communication (IPC) between SCSI peripheral devices (disks) and SCSI Host Bus Adapters. A commonly seen arrangement on a motherboard is shown in the ASCII art below. The SCSI HBA is not a bus, and the IPC between SCSI disks and the SCSI HBA cannot be constrained to follow the same format as communication between a bus and its child devices; this is a case where a new Metalanguage API for communication is a good idea.


As an honourable mention, it would also have been possible to simply use the UDI Generic I/O Metalanguage for communication between the SCSI disks and their parent SCSI HBA -- the Generic I/O Metalanguage is equally adequate for that purpose.
<syntaxhighlight lang="text">
RootNode
|- PCI-Bus-0
| |- ...
| +- ...
|
|- PCI-Bus-1
| +- SCSI HBA
| |- SCSI-Peripheral-0 (disk)
| +- SCSI-Peripheral-1 (disk)
|
+- PCI-Bus-2
</syntaxhighlight>


Metalanguages are essentially UDI IPC Channel protocol definitions or API definitions, and definitions of extensions to the core specification. Hence the name: Meta-''LANGUAGES''.


==Driver configuration==
There's a special configuration method for static properties of UDI drivers using a file called udiprops.txt. This file is distributed independently in each driver package for source code distributions and linked into a special section (called .udiprops) for binary distributions.


The udiprops.txt file doesn't only allow for static configuration options, but is also used in the build process for UDI drivers, since they do not use makefiles - not that it would be technically unfeasible. The UDI specification defines tools for building, packaging and installing UDI drivers for simplicity's sake since, unlike POSIX tools, they don't require operating systems to have any extra functionality (e.g., a VFS). Luckily, these tools are available in the reference implementation; all you need to do is build them.

Below you can see a sample udiprops.txt:

<syntaxhighlight lang="text">
properties_version 0x101

message 1 Project UDI
message 2 http://www.project-UDI.org/participants.html
message 3 Pseudo-Driver
message 4 Generic UDI Pseudo-Driver
release 3 1.01

supplier	1
contact		2
name		3
shortname	pseudod

##
## Interface dependencies
##
requires udi		0x101
requires udi_gio	0x101

##
## Build instructions.
##

module pseudod
compile_options -DPSEUDO_GIO_META=1
source_files pseudo.c pseudo.h
region 0

##
## Metalanguage usage
##

meta 1 udi_gio		# Generic I/O Metalanguage

child_bind_ops 1 0 1		# GIO meta, primary region, ops_index 1

# Orphan driver; no device line

#
# Initialization, shutdown messages
#
message 1100  pseudod: devmgmt_req %d
message 1500  pseudod: final_cleanup_req
</syntaxhighlight>

Of course, udiprops.txt can be a lot more complex than this; the example above is only meant to show what one looks like. You should check the specification for all compile options, statements and configuration options.


==Programming Model==
All UDI function calls are asynchronous in nature; this means that they implicitly do not block. A ''compliant'' UDI driver will always be implicitly non-blocking. Whether or not the ''host kernel'' supports non-blocking programming models is up to that kernel, and for any particular kernel, it may be necessary to use locking, mutexes and blocking. Naturally, for a kernel that fully supports a non-blocking, asynchronous model, UDI will simply scale seamlessly.


UDI drivers, because of their asynchronous nature, behave like servers to a large extent and they have very good throughput, owing to the fact that the driver itself will only block if the host kernel imposes a limitation on it. For a host kernel which does not have scaling limitations, UDI drivers will innately also scale without limitations -- the throughput of a ''compliant'' UDI driver is dependent solely on the limitations of the host kernel.

UDI drivers do not implicitly assume the use of locking, blocking, or any specific threading or synchronization model, so they fit into virtually any kind of host environment. As such, the UDI specification does not define any locking operations, and it is entirely possible for a host kernel to run UDI drivers locklessly.


==Data objects==

There are several types of data we need to look at. First, there is module-global data, which resides in the .rodata section; this data can be global because it is read-only and thus won't cause any race conditions - remember that although only one thread of execution can be active per region, several regions of the same driver instance may run in parallel. Second, there is region-local and region-global data. Last but not least, there are function-local variables.

Region-local data is data private to a region. Being private makes it safe to move the region to a different location or domain without affecting any of the neighbouring regions. Region-global data is data that is not particular to a channel or operation.

Data objects are allocated via a UDI allocation interface. Let's take a look at the existing types of data objects.

===Control blocks===

Control blocks are semi-opaque data objects (i.e., the driver does not see the whole object) used in metalanguage operations. Once a control block is sent via a channel operation, it may not be referenced until that operation completes. The same is true for asynchronous service calls - the control block only becomes available again when the callback is delivered. The way to think about this is that a control block is owned by only one region at a time, and ownership is transferred from one place to another.

There are several types of control blocks, the generic block type being udi_cb_t. All other types of control blocks are supersets of the generic control block. Also, there is the notion of control block groups - control blocks are categorized into groups of control blocks of the same size. Control blocks within the same group can thus be used interchangeably using casts.

Each control block may own a scratch space, which is driver-specific and is preserved across asynchronous service calls. The driver can change the size of the scratch space for each of its control block types; if a scratch size is zero, the corresponding pointer must not be dereferenced.

<source lang="c">
typedef struct {
udi_channel_t channel;
void *context;
void *scratch;
void *initiator_context;
udi_origin_t origin;
} udi_cb_t;
</source>

===Handles===

Handles are opaque objects, meaning that the driver does not know their internal representation in the UDI environment. An environment can implement them as simple (void *)s, internally casting them to pointers to the appropriate internal structures, or use abstract types.
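
For example, an environment might expose udi_channel_t as a pointer to an incomplete, environment-internal structure. This is purely illustrative; the internal type is whatever the environment chooses:

<syntaxhighlight lang="c">
/* Environment-internal; drivers only ever see the opaque handle type. */
struct env_channel_endpoint;                         /* hypothetical internal type   */
typedef struct env_channel_endpoint *udi_channel_t;  /* the handle handed to drivers */
</syntaxhighlight>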

==Initial state==

Drivers are relocatable object files; they have no entry points. A UDI driver has only one global variable, udi_init_info, which describes the primary module (and any secondary modules), its primary region (and any secondary regions belonging to it or to the other modules), and which control blocks it requires. udi_init_info is of type udi_init_t:

<source lang="c">
typedef const struct { // Don't forget that global variables are read-only
udi_primary_init_t *primary_init_info;
udi_secondary_init_t *secondary_init_list;
udi_ops_init_t *ops_init_list;
udi_cb_init_t *cb_init_list;
udi_gcb_init_t *gcb_init_list;
udi_cb_select_t *cb_select_list;
} udi_init_info;
</source>
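
As a minimal sketch, a single-module, single-region driver's udi_init_info might be defined as below. The pseudo_* lists are hypothetical and driver-specific; their contents are filled in according to the initialization chapter of the specification.

<syntaxhighlight lang="c">
#include <udi.h>

/* Hypothetical driver-specific initialization structures. */
extern udi_primary_init_t pseudo_primary_init;
extern udi_ops_init_t     pseudo_ops_init_list[];
extern udi_cb_init_t      pseudo_cb_init_list[];

udi_init_t udi_init_info = {
	&pseudo_primary_init,    /* primary_init_info                         */
	NULL,                    /* secondary_init_list: no secondary regions */
	pseudo_ops_init_list,    /* ops_init_list                             */
	pseudo_cb_init_list,     /* cb_init_list                              */
	NULL,                    /* gcb_init_list                             */
	NULL                     /* cb_select_list                            */
};
</syntaxhighlight>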


==Driver failures==


When illegal behavior is detected by the environment, the misbehaving region will usually be region-killed and all neighbouring regions will be notified. All channels to that region are closed and all resources owned by it are freed.



==See also==
* Combuster's effort on creating a [[User:Combuster/UDI_Graphics|graphics metalanguage]]
* Love4Boobies' page for several other [[User:Love4boobies|UDI drafts]]

==Existing Implementations==
* [http://projectudi.sf.net/ Reference implementation] - Mostly targeted at Linux
* [http://github.com/thepowersgang/acess2 Acess2] - Mostly complete implementation (with network support)
* [http://www.d-rift.nl/combuster/mos3/ MOS3]


==External Links==
* [http://projectudi.sf.net/ Reference implementation]
* [http://www.ties.org/deven/udi.html Deven Corzine's editorial]
* [http://uw714doc.xinuos.com/en/UDI_dwg/dwg_code_top.html UDI Driver Writer's Guide]
