2-1. Basics of μT-Kernel 3.0
Software configuration
Figure 2-1-1 shows the software configuration with μT-Kernel 3.0 as the OS.
Software is divided into application and system software in general.
An application is a program that achieves the feature required for a particular application, and in the case of embedded systems, it is the application that implements the product-specific features.
System software is an OS-centered program that allows applications to run on a microprocessor.
The system software consists of the following.
- OS
The core program of the system software; in this article, it is μT-Kernel 3.0.
- Device drivers
Device drivers are software that controls I/O devices, and one exists for each I/O device. Device drivers can be operated from the application through the API of the device management function of μT-Kernel 3.0.
- Subsystems
Subsystems are programs that extend the features of the OS. Their main purpose is to implement middleware. Subsystems can be operated from applications through the API of the subsystem management function of μT-Kernel 3.0.
API and data type
Application Programming Interface (API) is an interface for invoking system software features from an application.
The API of μT-Kernel 3.0 is defined as a function in C language. Most APIs are C functions with names of the following form, called system calls.
tk_<operation>_<operation target>
For example, the API to create a task is a C function called "tk_cre_tsk": "cre" represents the "create" operation, and "tsk" represents the operation target, a task.
In addition to system calls, library functions also exist in the API. Library functions can be distinguished by the absence of "tk" at the beginning of their names. The difference between the two is that system calls are executed internally by the OS, whereas library functions are executed in the context of a task. Library functions are used for low-level control at the hardware level, for example.
μT-Kernel 3.0 uses data types uniquely defined in μT-Kernel 3.0 for API arguments and return values.
The main integer data types defined in μT-Kernel 3.0 are shown in Table 2-1-1.
Data type name | Explanation |
---|---|
B | Signed 8-bit integer |
H | Signed 16-bit integer |
W | Signed 32-bit integer |
D | Signed 64-bit integer |
UB | Unsigned 8-bit integer |
UH | Unsigned 16-bit integer |
UW | Unsigned 32-bit integer |
UD | Unsigned 64-bit integer |
INT | Signed integer (Size depends on CPU) |
UINT | Unsigned integer (Size depends on CPU) |
In addition to integer data, data types with specific meanings are also defined. The main data types other than integer types defined in μT-Kernel 3.0 are shown in Table 2-1-2.
Data type name | Explanation |
---|---|
ID | ID number of Kernel object |
ATR | Attribute of Kernel object |
ER | Error code |
PRI | Priority |
TMO | Timeout length |
SZ | Size |
BOOL | Boolean value (TRUE: true, FALSE: false) |
Among the data types of μT-Kernel 3.0, the ER type represents an error code, and most APIs return one as their return value.
An error code is a negative integer; the value E_OK (0) indicates that no error occurred. In other words, if the return value of an API that returns an error code is negative, an error occurred during API execution.
Kernel object
In μT-Kernel 3.0, the operation target is called an object. Since the name "object" can be used in a variety of ways, this article will refer to it as a kernel object to distinguish it from other objects.
A task, the execution unit of a program, is also a kernel object. Various kernel objects are also used for communication and synchronization between tasks, which will be explained later. Table 2-1-3 lists the kernel objects in μT-Kernel 3.0. Abbreviations in the table are those used for API names, etc.
Functional classification | Name | Abbreviation |
---|---|---|
Task | Task | tsk |
Synchronization and communication | Semaphore | sem |
| Mutex | mtx |
| Event flag | flg |
| Message buffer | mbf |
| Mailbox | mbx |
Memory management | Variable-size memory pool | mpl |
| Fixed-size memory pool | mpf |
Time event | Alarm handler | alm |
| Cyclic handler | cyc |
Kernel objects have the following common rules.
- Kernel objects are created and deleted by the API
Kernel objects are always created by the API. In other words, to use a kernel object, it must first be created with the corresponding creation API. Kernel objects that are no longer needed are deleted by the API.
The API for creating kernel objects is named in the form tk_cre_XXX, and the API for deleting them in the form tk_del_XXX, where XXX is the abbreviation of the target kernel object name.
- Kernel objects are managed by ID numbers
When a kernel object is created, it is assigned an ID number, which is returned as the return value of the creation API. ID numbers are assigned automatically; the user cannot specify a particular value.
To operate on a kernel object with the API, specify the target kernel object by its ID number.
2-2. Basics of tasks
Task and task management function
A task is the basic unit of execution of a program. A program executed on μT-Kernel 3.0 is basically a set of multiple tasks, and an application is also a collection of multiple tasks. In addition to tasks, however, there are also execution units called handlers. Handlers are described in a later section.
In μT-Kernel 3.0, various operations related to tasks are performed via API. These operations are called task management functions. The main APIs for task management functions are listed in Table 2-2-1.
API name | Feature description |
---|---|
tk_cre_tsk | Create Task |
tk_sta_tsk | Start task (start task operation) |
tk_ext_tsk | Terminate invoking task |
tk_del_tsk | Delete task |
tk_exd_tsk | Terminate and delete invoking task |
Initial tasks and usermain function
Tasks are created and started programmatically using the API. However, the very first task executed in the program is the only task that μT-Kernel 3.0 itself creates and starts.
This first task is called the initial task. The initial task is created and executed when μT-Kernel 3.0 is started and operates as follows.
- System software initialization processing
Performs system software initialization processing such as device driver registration.
- Execution of the application
Executes the main function of the application and starts the application. The main function of the application is a function named usermain, whose contents can be freely written according to the application.
The usermain function performs the initialization processing of the application, such as creating and starting the tasks used by the application and creating other kernel objects.
- System software termination processing
When the application's main function (the usermain function) completes, the system software is terminated, and μT-Kernel 3.0 operation also terminates.
Note that when the usermain function returns, the entire system software is terminated even if other tasks are still running. Therefore, the usermain function must not return while the application program is executing.
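As a sketch of how the pieces above fit together, a minimal usermain might create and start one application task with the task management APIs. The task name task_a, the priority 10, and the 1024-byte stack size are assumed values for illustration; the exact header path and startup details depend on the μT-Kernel 3.0 BSP in use, so this will not compile outside that environment.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

/* A hypothetical application task: entry functions receive a start code
 * and the exinf value given at creation, and must end with tk_ext_tsk(). */
LOCAL void task_a(INT stacd, void *exinf)
{
    /* ... application processing ... */
    tk_ext_tsk();                      /* terminate the invoking task */
}

EXPORT INT usermain(void)
{
    T_CTSK ctsk = {0};
    ID     tskid;

    ctsk.tskatr  = TA_HLNG | TA_RNG3;  /* written in C, protection level 3 */
    ctsk.task    = task_a;
    ctsk.itskpri = 10;                 /* initial priority (assumed value) */
    ctsk.stksz   = 1024;               /* stack size in bytes (assumed) */

    tskid = tk_cre_tsk(&ctsk);         /* returns the new task's ID number */
    if (tskid < E_OK) {
        return -1;                     /* negative value means an error */
    }
    tk_sta_tsk(tskid, 0);              /* start the task with start code 0 */

    /* usermain must not return while the application runs; block forever. */
    tk_slp_tsk(TMO_FEVR);
    return 0;
}
```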
Task priority
The initial priority of a task is specified when the task is created. After creation, the priority can be changed by API.
Task priority is the value used in task priority scheduling. Task priority is a positive integer starting from 1. The maximum value can be determined at the time of building μT-Kernel 3.0, but the specification requires that it be 16 or higher.
The smaller the value of the task priority, the higher the task precedence. In other words, priority 1 means the highest task precedence.
Task attributes
Task attributes represent the nature of that task. The task attribute is specified when the task is created and cannot be changed after the creation.
There are a variety of task attributes as shown in Table 2-2-2, and several can be specified at the same time.
The two attributes specified in a typical application task are the TA_HLNG and TA_RNG3 attributes. TA_HLNG attribute indicates that the task is written in C. TA_RNG3 attribute indicates that the task is executed at protection level 3 (application protection level).
Attribute name | Explanation |
---|---|
TA_HLNG | The task is written in a high-level language (C language). |
TA_ASM | The task is written in assembly language. |
TA_RNGn | The task runs at protection level n, where n is a value between 0 and 3. |
TA_SSTKSZ | Specifies the system stack size. |
TA_USERSTACK | Specifies the user stack area. |
TA_USERBUF | Uses a user-specified area for the stack. |
TA_DSNAME | Uses the object name for debugging. |
TA_FPU | Uses a floating-point unit (FPU). |
TA_COPn | Uses coprocessor n. The range of n is determined by the specification of the microprocessor used. |
Task states
Each task has its own state and executes while having its state changed by the API.
The important task states are DORMANT, RUNNING, READY, and WAITING. Of these, RUNNING and READY are collectively referred to as the "ready for execution" state. The main states of the task and their transitions are shown in Figure 2-2-1.
The main states of the task are described below.
- Dormant state (DORMANT)
The task has not yet been started, or has completed execution and stopped. The code of a task in this state is not executed. Immediately after a task is created, it is in the DORMANT state. A task in the DORMANT state transitions to the "ready for execution" state by the task start API (tk_sta_tsk).
- "Ready for execution" state
The task is ready to be executed. However, only one task can actually be executed at a time, so the task with the highest precedence among the tasks in the "ready for execution" state is placed in the RUNNING state. If there are multiple tasks with the same highest priority, the task that entered the "ready for execution" state first runs.
  - RUNNING state
  The task is running. Only one task is in this state at any time.
  - READY state
  The task is ready to run but waits to be executed because a task with higher precedence is in the running state.
Dispatching is the process of deciding which task to execute among the "ready for execution" tasks according to their priority and placing it in the RUNNING state. Conversely, the action of suspending a RUNNING task and returning it to the READY state is called preemption.
A task in the RUNNING state transitions to the DORMANT state via the terminate task API (tk_ext_tsk), and to the WAITING state via various other APIs.
- Waiting state (WAITING)
Execution of the task is suspended while it waits for some condition to be satisfied. There is a variety of wait conditions; specific examples are given in subsequent sections.
When the wait condition of a task is satisfied, it transitions to the READY state.
There are two other task states: NON-EXISTENT, the virtual state before a task is created, and SUSPENDED, which is prohibited for use in applications and is not dealt with in this article.
2-3. Basic task synchronization
Task synchronization
Task synchronization is to match the timing of actions between multiple tasks. In μT-Kernel 3.0, the multitasking feature allows multiple tasks to be executed simultaneously, but if there is any dependency between these tasks, task synchronization is required.
μT-Kernel 3.0 has various features to realize task synchronization. Among them, the feature to synchronize tasks by direct manipulation of tasks is called a task synchronization function. The main APIs for task synchronization functions are listed in Table 2-3-1.
API name | Feature description |
---|---|
tk_dly_tsk | Delay task (suspend task operation) |
tk_wup_tsk | Request task wakeup |
tk_slp_tsk | Wait for task wakeup |
Suspend task operation
In a multitasking execution environment, it is important to ensure that once a task completes its own required processing, other tasks can execute promptly.
In such cases, the task's delay API (tk_dly_tsk) can be called to suspend the task's operation and cede execution rights to another task. The task that called the API will be in a state of waiting for the specified time to elapse.
Task wakeup
If you wish to synchronize the order of task processing, you can achieve this by using the task wake-up request API (tk_wup_tsk) and the task wake-up waiting API (tk_slp_tsk).
The following is a synchronization method that uses task wake-up.
Tasks to be processed later
A task that executes a chunk of processing later calls the task's wake-up waiting API (tk_slp_tsk) before the processing and transitions to the wake-up waiting state. The waiting state continues until another task wakes it up.
Tasks to be processed first
When the task that executes its own processing first completes executing, it calls the wake-up request API (tk_wup_tsk) to wake up the waiting task.
There are no constraints on the order in which the task wake-up waiting API and the task wake-up request API are called.
If the task wake-up request API has already been called when the task wake-up waiting API is called, execution will continue without transitioning the task to the waiting state.
The number of wake-up requests is counted. For example, if the wake-up request API is called twice before the wake-up waiting API is called, the next two calls to the wake-up waiting API will continue execution without entering the waiting state. In other words, calls to the task wake-up request API and the task wake-up waiting API correspond one-to-one.
Synchronization flow by task wake-up
The flow of synchronization by the task wake-up request API (tk_wup_tsk) and the task wake-up waiting API (tk_slp_tsk) is explained with an example below.
Consider the case where there are two tasks TASK-A and TASK-B and you want to execute the processing of TASK-B after the processing of TASK-A. TASK-A and TASK-B synchronize using the API as follows.
Operation of TASK-A
After TASK-A completes processing, it calls the task wake-up API (tk_wup_tsk) and requests TASK-B to wake up.
Operation of TASK-B
Before processing, TASK-B calls the task wake-up waiting API (tk_slp_tsk) and suspends operation in a wake-up waiting state until wake-up is requested by TASK-A.
Regardless of whether TASK-A or TASK-B calls the API first, in either case, TASK-B processing will be executed after TASK-A processing as shown below.
Figure 2-3-1 shows the case where TASK-B first calls the task wake-up waiting API (tk_slp_tsk).
- First, suppose TASK-B is in the running state and TASK-A is in the ready state (TASK-B is assumed to have higher precedence, with lower task priority number.).
- TASK-B transitions to the wake-up waiting state by calling the task's wake-up waiting API (tk_slp_tsk).
- TASK-A enters the running state.
- TASK-A calls the task's wake-up request API (tk_wup_tsk) and requests TASK-B to wake up.
- TASK-B returns to the running state because its wake-up wait condition has been satisfied.
Next, Figure 2-3-2 shows the case where TASK-A calls the task's wake-up request API (tk_wup_tsk) first.
The flow in Figure 2-3-2 is described.
- First, suppose TASK-A is in the running state and TASK-B is in the ready state (TASK-A has the higher precedence, meaning lower task priority number.).
- TASK-A calls the task's wake-up request API (tk_wup_tsk) and requests TASK-B to wake up. Since TASK-A remains in the running state, TASK-B also remains in the ready state.
- TASK-A completes its execution and TASK-B enters the running state.
- TASK-B calls the task wake-up waiting API (tk_slp_tsk). At this time, since a wake-up request has already been made by TASK-A, the wake-up wait condition is satisfied and TASK-B continues execution.
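The two task bodies in this example might be sketched as follows. This is a fragment, not a complete program: it assumes the μT-Kernel 3.0 environment, and that tskid_b holds TASK-B's ID number obtained when the tasks were created (e.g. in usermain).

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

/* Assumed to be set elsewhere when TASK-B is created. */
IMPORT ID tskid_b;

/* TASK-A: executes its processing first, then wakes TASK-B.
 * The order of the two API calls between the tasks does not matter,
 * because wake-up requests are counted. */
LOCAL void task_a(INT stacd, void *exinf)
{
    /* ... processing that must finish first ... */
    tk_wup_tsk(tskid_b);      /* request wakeup of TASK-B */
    tk_ext_tsk();
}

/* TASK-B: waits until TASK-A has finished. */
LOCAL void task_b(INT stacd, void *exinf)
{
    tk_slp_tsk(TMO_FEVR);     /* wait for wakeup, with no timeout */
    /* ... processing that must run after TASK-A ... */
    tk_ext_tsk();
}
```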
2-4. Event flags
Event flags
Event flags are kernel objects in μT-Kernel 3.0 used primarily to control synchronization between tasks.
An event is certain programming information that can be represented as being present or absent. For example, "data ready" or "button pressed" are examples of events.
The presence or absence of an event can be represented by a single bit. An event flag is a collection of bits that represent the presence or absence of these events. Compared to the synchronization by task wake-up described in the previous section, event flags can convey more information and can be used for synchronization among multiple tasks.
Event flags are manipulated with μT-Kernel 3.0 API. The main APIs that control event flags are listed in Table 2-4-1.
API name | Feature description |
---|---|
tk_cre_flg | Create event flag |
tk_set_flg | Set event flag |
tk_clr_flg | Clear event flag |
tk_wai_flg | Wait for event flag |
tk_del_flg | Delete event flag |
Operation of event flag
An event flag is a collection of 1-bit data. The number of bits in one event flag is the bit width of the UINT type of that microprocessor, which matches the native bit size of the int type of the C language.
Each bit of the event flag can be set or cleared by the API. Set is to set a bit to 1 and clear is to set a bit to 0. The API also allows the task to wait until a specific bit of the event flag is set (Figure 2-4-1).
The basic steps for synchronization among tasks using event flags are described below.
- First, the event flag is created by the event flag creation API (tk_cre_flg).
- The task that signals the event occurrence sets the corresponding event flag bit by calling the event flag set API (tk_set_flg) when the event occurs. A single API can set multiple bits simultaneously.
- A task can wait for an event to occur by calling the event flag wait API (tk_wai_flg). Until the specified bit is set, the task enters a state waiting for the event flag and suspends its operation. If the specified bit is already set, operation continues without entering the waiting state. A task can wait for a bit to be set, but it cannot wait for a bit to be cleared.
- When the task waiting for an event is released from the wait, it clears the bit of the event flag so that the next event can be received. The event flag can be cleared by calling the event flag clear API (tk_clr_flg), or it can be cleared automatically when the wait is released, by specifying an option to the wait API.
Waiting for multiple events
The event flag wait API (tk_wai_flg) allows waiting for multiple bits in a single event flag simultaneously. The following conditions can be selected as wait conditions.
- AND wait: Wait until all specified bits are set
- OR wait: Wait until one of the specified bits is set
AND wait and OR wait cannot be combined.
Synchronization of multiple tasks by event flags
Synchronization between tasks by event flags can be done not only between tasks one-to-one, but also between multiple tasks.
Event flags can be set not only from a specific task but also from multiple tasks.
On the other hand, waiting for an event flag may or may not be done by multiple tasks, depending on the attributes used to create the event flag.
An event flag created with TA_WMUL attribute allows waiting by multiple tasks at the same time. This means that multiple tasks can wait for a single event to occur (Figure 2-4-2-a).
An event flag created with TA_WSGL attribute does not allow waiting by multiple tasks at the same time. If a task already exists in the waiting state and the event flag wait API (tk_wai_flg) is called, an error occurs (Figure 2-4-2-b).
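The event flag workflow described in this section can be sketched as below. The event bit assignments (EVT_DATA_READY, EVT_BTN_PRESSED) are hypothetical names for illustration, error checking is omitted, and the sketch assumes the μT-Kernel 3.0 environment.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

#define EVT_DATA_READY  (1U << 0)   /* hypothetical event bit assignments */
#define EVT_BTN_PRESSED (1U << 1)

LOCAL ID flgid;

LOCAL void setup(void)
{
    T_CFLG cflg = {0};
    cflg.flgatr  = TA_TFIFO | TA_WMUL;  /* allow multiple waiting tasks */
    cflg.iflgptn = 0;                   /* all bits initially cleared */
    flgid = tk_cre_flg(&cflg);
}

/* Signaling side: set the "data ready" bit when the event occurs. */
LOCAL void signal_data_ready(void)
{
    tk_set_flg(flgid, EVT_DATA_READY);
}

/* Waiting side: block until either event bit is set (OR wait); the
 * TWF_BITCLR option automatically clears the bits that released the wait.
 * TWF_ANDW would instead wait until all specified bits are set. */
LOCAL void wait_for_event(void)
{
    UINT flgptn;
    tk_wai_flg(flgid, EVT_DATA_READY | EVT_BTN_PRESSED,
               TWF_ORW | TWF_BITCLR, &flgptn, TMO_FEVR);
    /* flgptn holds the bit pattern at the moment the wait was released */
}
```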
2-5. Semaphores and mutexes
Semaphores
Semaphores are kernel objects in μT-Kernel 3.0 used primarily for mutual exclusion control of shared resources.
A semaphore contains a resource count indicating whether the corresponding resource exists and in what quantity. The resource count can be thought of as the number of tasks that can use the resource at the same time. Each semaphore is assigned a resource count value when it is created.
In most cases, the resource count of a semaphore is 1, meaning the resource can be used by only one task at a time. Semaphores with a resource count of one are called binary semaphores.
Semaphores with a resource count greater than one, i.e., those that can be used by multiple tasks simultaneously, are called counting semaphores. Counting semaphores are used less often, but appear, for example, in managing external communication connections.
Semaphores are manipulated with μT-Kernel 3.0 API. The main APIs that control semaphores are listed in Table 2-5-1.
API name | Feature description |
---|---|
tk_cre_sem | Create semaphore |
tk_wai_sem | Wait on semaphore |
tk_sig_sem | Signal semaphore |
tk_del_sem | Delete semaphore |
Mutual exclusion control by semaphore
The required steps for the mutual exclusion control of shared resources using semaphores are described below.
- A semaphore is created by the semaphore creation API (tk_cre_sem). As a general rule, semaphores and shared resources should correspond one-to-one; that is, one semaphore is created for each shared resource.
- The task calls the semaphore's resource acquisition API (tk_wai_sem) to acquire resources from the semaphore before using the shared resources. Once a resource is acquired, the task can use that resource. If another task has already acquired the same resource, the task enters a state of waiting for semaphore resources and suspends its operation.
- When the task finishes using the acquired shared resource, it calls the semaphore's signal semaphore API (tk_sig_sem) to return the resource to the semaphore. If another task is waiting for semaphore resources for that semaphore, the task is released from the WAITING state. A task released from the waiting state acquires resources.
It is important that a task that has acquired semaphore resources finishes using them and returns them as soon as possible, so that other tasks sharing the resource are not kept waiting longer than necessary.
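The steps above can be sketched as follows, assuming a binary semaphore (resource count 1) guarding one shared resource. Error checking is omitted, and the sketch assumes the μT-Kernel 3.0 environment.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

LOCAL ID semid;

LOCAL void setup(void)
{
    T_CSEM csem = {0};
    csem.sematr  = TA_TFIFO;   /* waiting tasks queued in FIFO order */
    csem.isemcnt = 1;          /* initial resource count: binary semaphore */
    csem.maxsem  = 1;          /* maximum resource count */
    semid = tk_cre_sem(&csem);
}

/* Guard each use of the shared resource with the semaphore. */
LOCAL void use_shared_resource(void)
{
    tk_wai_sem(semid, 1, TMO_FEVR);  /* acquire 1 resource; wait if in use */
    /* ... use the shared resource; keep this as short as possible ... */
    tk_sig_sem(semid, 1);            /* return the resource promptly */
}
```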
Example of mutual exclusion control by a semaphore
The following is an example of how mutual exclusion control between tasks by a semaphore works.
Suppose there are two tasks TASK-A and TASK-B and one resource shared between the tasks.
Mutual exclusion control of the resources shared by TASK-A and TASK-B is performed as follows (Figure 2-5-1).
- Two tasks TASK-A and TASK-B exist, and only TASK-B is running. Assume that TASK-A has the higher precedence (the lower task priority number).
- TASK-B calls the semaphore's resource acquisition API (tk_wai_sem) and acquires the resource.
- TASK-A enters the running state, and TASK-B, with lower precedence (the higher task priority number), enters the ready state.
- TASK-A calls the semaphore's resource acquisition API (tk_wai_sem).
- Since the resource has already been acquired by TASK-B, TASK-A cannot acquire it and enters a state of waiting for resource acquisition. Therefore, TASK-B is again in the running state.
- TASK-B calls the semaphore's signal API (tk_sig_sem) and returns the resource.
- Since the resource has been returned, TASK-A acquires it and enters the running state. TASK-B, with lower precedence (the higher task priority number), enters the ready state.
Mutexes
Mutexes are kernel objects in μT-Kernel 3.0 used primarily for mutual exclusion control of critical sections of tasks.
A critical section is an execution path in code where problems may occur if multiple tasks execute it simultaneously.
The feature of the mutex is similar to that of the semaphore. However, while semaphores are intended for mutual exclusion control of shared resources in general, mutexes are specialized for mutual exclusion control of critical sections of tasks.
Mutexes are manipulated with μT-Kernel 3.0 API. The main APIs that control mutexes are listed in Table 2-5-2.
API name | Feature description |
---|---|
tk_cre_mtx | Create mutex |
tk_loc_mtx | Lock mutex |
tk_unl_mtx | Unlock mutex |
tk_del_mtx | Delete mutex |
Mutual exclusion control by a mutex
The required steps for mutual exclusion control of a critical section using a mutex are described below.
- Create a mutex corresponding to the critical sections to be controlled exclusively.
- A task calls the mutex locking API (tk_loc_mtx) to lock the mutex before executing the critical section. If another task has already locked the mutex, the task enters a state waiting for the mutex to be unlocked and suspends its operation.
- When a task finishes executing the critical section, it calls the mutex unlocking API (tk_unl_mtx) to unlock the mutex. If another task is waiting for the mutex to be unlocked, the task is released from the waiting state. A task released from the waiting state locks the mutex.
It is important that the task that locked the mutex finishes executing the critical section and unlocks the mutex as soon as possible, so that other tasks sharing the mutex are not kept waiting for it to be unlocked longer than necessary.
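The mutex steps above can be sketched as follows. The TA_INHERIT attribute is one of the options for the priority-adjustment feature discussed below; the sketch assumes the μT-Kernel 3.0 environment and omits error checking.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

LOCAL ID mtxid;

LOCAL void setup(void)
{
    T_CMTX cmtx = {0};
    cmtx.mtxatr = TA_INHERIT;   /* priority inheritance, to counter
                                 * priority inversion */
    mtxid = tk_cre_mtx(&cmtx);
}

LOCAL void run_critical_section(void)
{
    tk_loc_mtx(mtxid, TMO_FEVR);   /* lock; wait if another task holds it */
    /* ... critical section: keep this as short as possible ... */
    tk_unl_mtx(mtxid);             /* unlock; only the locking task may
                                    * unlock the mutex */
}
```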
Differences between mutexes and semaphores
Mutexes and semaphores have similar features, but differ in the following ways.
- A mutex has no resource count setting. A mutex allows only one task at a time to execute the critical section; this behavior is equivalent to a binary semaphore, and mutexes have no counterpart to counting semaphores.
- A mutex is strongly tied to the task that locks it. A mutex can be unlocked only by the task that locked it, and if the locking task terminates while the mutex is still locked, the mutex is automatically unlocked. Semaphores, on the other hand, have no such strong association with tasks: the task that acquires a resource and the task that returns it may be different.
- A mutex has a feature that automatically changes the priority of the task that has locked it, to solve the priority inversion problem that generally occurs in mutual exclusion control between tasks. This is a notable characteristic of mutexes and the most significant difference from semaphores.
2-6. Message buffers and mailboxes
Message buffers
Message buffers are kernel objects in μT-Kernel 3.0 used to communicate data between tasks.
Data communicated between tasks via message buffer is called a message. Messages are data of arbitrary size, and their content can be freely determined by the application.
The message buffer feature is to store messages sent from tasks and pass them on to other tasks. If multiple messages are stored in a message buffer, the messages are received in the order in which they are stored.
Message buffers are manipulated with μT-Kernel 3.0 API. The main APIs that control the message buffers are listed in Table 2-6-1.
API name | Feature description |
---|---|
tk_cre_mbf | Create message buffer |
tk_snd_mbf | Send message to message buffer |
tk_rcv_mbf | Receive message from message buffer |
tk_del_mbf | Delete message buffer |
Communication procedure via message buffer
The basic operation on the message buffer is sending and receiving messages (Figure 2-6-1).
The basic steps for communication between tasks using message buffer are described below.
- Message buffer is created by the message buffer creation API (tk_cre_mbf). At this time, the size of the message buffer and the maximum size of a single message are specified. The size of the message buffer is the size of the memory area in which messages are stored. No additional message can be stored beyond this size.
- The sending task calls the message sending API (tk_snd_mbf) and sends the message to the message buffer, where it is stored. If messages already stored at this time leave no room in the message buffer, the task enters a state waiting to send a message and suspends operation. If there is free space in the message buffer, the sending task continues its operation without entering the waiting state.
- The receiving task calls the message receiving API (tk_rcv_mbf) and receives the message from the message buffer. If there are no messages in the message buffer at this time, the task enters a state waiting for a message arrival and suspends operation.
Communication between tasks via message buffer is basically asynchronous. However, if there is no room in the message buffer when a task sends a message or no message in the message buffer when a task tries to receive a message, the task enters a waiting state and synchronization between tasks is performed.
A special use of the message buffer is synchronous communication when a message buffer size is set to 0. If the size of the message buffer is 0, no message can be stored, so both sending and receiving are synchronized.
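The send and receive steps can be sketched as follows. The APP_MSG structure and the buffer sizes are hypothetical values for illustration; the sketch assumes the μT-Kernel 3.0 environment and omits error checking.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

LOCAL ID mbfid;

typedef struct {        /* hypothetical application-defined message */
    INT code;
    INT value;
} APP_MSG;

LOCAL void setup(void)
{
    T_CMBF cmbf = {0};
    cmbf.mbfatr = TA_TFIFO;
    cmbf.bufsz  = 256;              /* total buffer size in bytes */
    cmbf.maxmsz = sizeof(APP_MSG);  /* maximum size of one message */
    mbfid = tk_cre_mbf(&cmbf);
}

LOCAL void sender(void)
{
    APP_MSG msg = { 1, 42 };
    /* Waits if the buffer is full; with bufsz = 0 this call would
     * instead rendezvous synchronously with the receiver. */
    tk_snd_mbf(mbfid, &msg, sizeof(msg), TMO_FEVR);
}

LOCAL void receiver(void)
{
    APP_MSG msg;
    INT sz = tk_rcv_mbf(mbfid, &msg, TMO_FEVR);  /* waits if empty */
    (void)sz;  /* sz is the received size, or a negative error code */
}
```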
Mailboxes
Mailboxes are kernel objects in μT-Kernel 3.0 used to communicate data between tasks.
Data communicated between tasks via a mailbox is called a message. Messages are data of arbitrary size, and their content can be freely determined by the application. However, a mailbox message must reserve an area at its beginning for the message header used by the OS.
The mailbox feature is to pass messages sent from tasks to other tasks. It is similar to the message buffer already described, with the difference that the message buffer sends and receives the message data itself, while the mailbox sends and receives only the start address of the message. Therefore, when a task sends a message, the task allocates the necessary memory, and the receiving task must release the memory when the use of the message is finished. μT-Kernel 3.0 has a memory pool function that dynamically manages memory, and the mailbox and memory pool functions are usually used together.
Mailboxes are manipulated with μT-Kernel 3.0 API. The main APIs that control the mailboxes are listed in Table 2-6-2.
API name | Feature description |
---|---|
tk_cre_mbx | Create Mailbox |
tk_snd_mbx | Send Message to Mailbox |
tk_rcv_mbx | Receive Message from Mailbox |
tk_del_mbx | Delete Mailbox |
Communication procedure by mailbox
The basic operation on a mailbox is sending and receiving messages.
The basic steps for communication between tasks using a mailbox are described below (Figure 2-6-2).
- A mailbox is created by the mailbox creation API (tk_cre_mbx).
- The task that sends a message prepares the message to be sent. Since the start address of the message is passed to another task, the message must be placed in a memory area that can be shared with other tasks. Normally, the memory pool management function is used to allocate a memory block for the message.
- The sending task calls the message sending API (tk_snd_mbx) and sends the message to the mailbox. Only the start address of the message is actually sent.
- The receiving task calls the message receiving API (tk_rcv_mbx) and receives the message from the mailbox. Only the start address of the message is actually received. If there are no messages in the mailbox at this time, the receiving task enters a state waiting to receive a message and suspends operation.
- The receiving task releases the memory area for the received message after it no longer needs the received message. If the memory pool management function is used, the memory block is returned.
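The steps above can be sketched as follows, combining a mailbox with a fixed-size memory pool for the message blocks. The APP_MSG structure and pool sizes are hypothetical; the sketch assumes the μT-Kernel 3.0 environment and omits error checking.

```c
#include <tk/tkernel.h>   /* μT-Kernel 3.0 definitions (path may vary by BSP) */

LOCAL ID mbxid;
LOCAL ID mpfid;

typedef struct {
    T_MSG header;    /* area reserved at the start for the OS message header */
    INT   value;     /* application-defined content */
} APP_MSG;

LOCAL void setup(void)
{
    T_CMBX cmbx = {0};
    T_CMPF cmpf = {0};

    cmbx.mbxatr = TA_TFIFO | TA_MFIFO;  /* FIFO task queue, FIFO messages */
    mbxid = tk_cre_mbx(&cmbx);

    cmpf.mpfatr = TA_TFIFO;
    cmpf.mpfcnt = 8;                    /* number of memory blocks */
    cmpf.blfsz  = sizeof(APP_MSG);      /* size of one block */
    mpfid = tk_cre_mpf(&cmpf);
}

LOCAL void sender(void)
{
    APP_MSG *msg;
    tk_get_mpf(mpfid, (void **)&msg, TMO_FEVR);  /* allocate message memory */
    msg->value = 42;
    tk_snd_mbx(mbxid, (T_MSG *)msg);    /* only the address is passed */
}

LOCAL void receiver(void)
{
    APP_MSG *msg;
    tk_rcv_mbx(mbxid, (T_MSG **)&msg, TMO_FEVR);  /* waits if empty */
    /* ... use msg->value ... */
    tk_rel_mpf(mpfid, msg);   /* release the memory when done */
}
```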
Merits and Demerits of Mailboxes
The advantage of the mailbox is that only the start address of a message is transferred between tasks, which is faster than the message buffer, which transfers all of the message data.
Also, since messages are not stored in a fixed memory area as with a message buffer, a mailbox is unlikely to run out of space the way a message buffer can.
The disadvantage of mailboxes, on the other hand, is that the memory areas used for messages must be managed dynamically. When sending a message, the sending task allocates memory, and the receiving task must release memory when the message is no longer needed.
Because a mailbox requires dynamic memory management, it increases programming complexity; if message communication speed is not a particular concern, it is therefore recommended to use a message buffer instead.
2-7. Alarm handlers and cyclic handlers
Alarm handlers
An alarm handler is a time event handler that starts at a specified time.
An alarm handler is a kernel object in μT-Kernel 3.0 and is a unit of program execution similar to a task. Alarm handlers have precedence over all tasks. In other words, when an alarm handler is started, execution of the task that was previously running is suspended. This operation is similar to that of the cyclic handlers and interrupt handlers described in later sections.
Alarm handlers are manipulated with the μT-Kernel 3.0 API. The main APIs that manipulate alarm handlers are listed in Table 2-7-1.
API name | Feature description |
---|---|
tk_cre_alm | Create Alarm Handler |
tk_sta_alm | Start Alarm Handler |
tk_stp_alm | Stop Alarm Handler |
tk_del_alm | Delete Alarm Handler |
Operation of an Alarm Handler
The operation steps of an alarm handler are described below (Figure 2-7-1).
- An alarm handler is created by the alarm handler creation API (tk_cre_alm).
- Start the alarm handler operation by calling the alarm handler operation start API (tk_sta_alm), specifying the time until the alarm handler is activated. After the specified time has elapsed, the alarm handler starts.
Since alarm handlers have precedence over tasks, the running task, if any, is suspended.
- An alarm handler is started only once. However, by calling the alarm handler operation start API (tk_sta_alm) again, the alarm handler operation can be started again.
- If the alarm handler operation stop API (tk_stp_alm) is called after tk_sta_alm but before the alarm handler is activated, the alarm handler will not be started at all.
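The steps above can be sketched as follows, assuming the μT-Kernel 3.0 environment; the 500 ms delay is an arbitrary example value.

```c
#include <tk/tkernel.h>

/* Alarm handler: runs once, 500 ms after tk_sta_alm. Keep the
 * processing short, since all tasks are suspended while it runs. */
LOCAL void alarm_handler(void *exinf)
{
    /* ... time-critical processing ... */
}

LOCAL void start_alarm(void)
{
    T_CALM calm  = { .almatr = TA_HLNG, .almhdr = alarm_handler };
    ID     almid = tk_cre_alm(&calm);

    tk_sta_alm(almid, 500);   /* activate the handler after 500 ms  */
    /* Calling tk_stp_alm(almid) before expiry would cancel it, and
     * calling tk_sta_alm again would restart the countdown.        */
}
```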
Cyclic handlers
A cyclic handler is a time event handler that starts repeatedly at regular intervals.
A cyclic handler is a kernel object in μT-Kernel 3.0 and is a unit of program execution similar to a task. Like alarm handlers, cyclic handlers also have precedence over all tasks and the running task, if any, is suspended.
Cyclic handlers can be manipulated with μT-Kernel 3.0 API. The main APIs that control the cyclic handler are listed in Table 2-7-2.
API name | Feature description |
---|---|
tk_cre_cyc | Create Cyclic Handler |
tk_sta_cyc | Start Cyclic Handler |
tk_stp_cyc | Stop Cyclic Handler |
tk_del_cyc | Delete Cyclic Handler |
Basic operation of a cyclic handler
The operation of a cyclic handler depends on the attributes used to create it.
The basic operation of a cyclic handler and its handling steps are shown below (Figure 2-7-2).
- Create a cyclic handler by using the cyclic handler creation API (tk_cre_cyc), setting the cycle interval "cyctim" at which the cyclic handler is executed.
- Call the cyclic handler operation start API (tk_sta_cyc) to start cyclic handler operation. Thereafter, the cyclic handler starts repeatedly at the interval cyctim set at creation time.
Since cyclic handlers have precedence over tasks, the running task, if any, is suspended.
- When the cyclic handler operation stop API (tk_stp_cyc) is called, the cyclic handler stops working and is not activated any more. If the cyclic handler operation start API (tk_sta_cyc) is called again after that, the cyclic handler resumes its operation.
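The steps above can be sketched as follows, assuming the μT-Kernel 3.0 environment; the 10 ms interval is an example value.

```c
#include <tk/tkernel.h>

/* Cyclic handler: invoked every cyctim milliseconds. It must finish
 * quickly, because it has precedence over all tasks. */
LOCAL void cyclic_handler(void *exinf)
{
    /* ... periodic processing ... */
}

LOCAL void start_cyclic(void)
{
    T_CCYC ccyc = { .cycatr = TA_HLNG,
                    .cychdr = cyclic_handler,
                    .cyctim = 10,     /* cycle interval (ms)        */
                    .cycphs = 0 };    /* phase until first start    */
    ID cycid = tk_cre_cyc(&ccyc);

    tk_sta_cyc(cycid);    /* begin periodic activation              */
    /* tk_stp_cyc(cycid) stops it; tk_sta_cyc(cycid) resumes it.    */
}
```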
Notes on alarm handlers and cyclic handlers
Since alarm and cyclic handlers have precedence over all tasks, tasks cannot run while a handler is being executed. The handler must complete processing promptly so that the tasks can be executed.
The μT-Kernel 3.0 API that can be used from alarm handlers and cyclic handlers is limited. For example, alarm handlers and cyclic handlers cannot be in a waiting state like tasks, so APIs that may cause transition to a waiting state cannot be used in handlers.
2-8. Interrupt handlers
Interrupts and interrupt handlers
An interrupt is a microprocessor hardware feature that temporarily suspends the processing of a running program and executes another program that has been specified in advance (Figure 2-8-1).
The interrupt factor (the cause of the interrupt) is determined by the hardware specifications of the microprocessor. There are various causes of interrupts, but a typical one is a notification from an I/O device to the CPU regarding the completion of input/output processing, etc.
The program executed when an interrupt occurs is called an interrupt handler. An interrupt handler is registered for each interrupt factor. How interrupt handlers are registered is also determined by the microprocessor hardware specifications.
Interrupt management
Although interrupts are a hardware feature whose detailed specifications differ from one microprocessor to another, the interrupt management function of μT-Kernel 3.0 allows various interrupt-related operations to be performed through a common API. For example, interrupt handlers can be registered with a common API and can be written as C functions.
However, since interrupts themselves are a hardware feature, the detailed behavior of the interrupt management APIs differs for each microprocessor. In μT-Kernel 3.0, specifications for functions that are highly hardware-dependent, such as interrupts, are called implementation specifications, and an implementation specification is provided for each microprocessor. To use the interrupt management function, refer to the implementation specification of μT-Kernel 3.0 for the target microprocessor.
The main APIs of the interrupt management function of μT-Kernel 3.0 are listed in Table 2-8-1.
Except for tk_def_int, these are library functions, not system calls. These library functions directly control interrupt-related hardware in the microprocessor.
API name | Feature description |
---|---|
tk_def_int | Define Interrupt Handlers |
EnableInt | Enable Interrupts |
DisableInt | Disable Interrupts |
ClearInt | Clear interrupt occurrence factor |
SetIntMode | Set Interrupt Mode |
Define Interrupt Handlers
Many microprocessors require assembly language to write interrupt handlers. However, by using the interrupt management function of μT-Kernel 3.0, interrupt handlers can be written as C functions. It is also possible to use some of the μT-Kernel 3.0 APIs from interrupt handler programs.
Interrupt handlers written in C are not executed immediately by the interrupt hardware of the microprocessor, but are executed after going through a common interrupt handler in μT-Kernel 3.0. For interrupt handlers written in C, specify the TA_HLNG attribute.
On the other hand, interrupt handlers that are executed immediately by the microprocessor's interrupt hardware without going through the common interrupt handler of μT-Kernel 3.0 can also be used. For such interrupt handlers, specify the TA_ASM attribute.
The interrupt handler with the TA_ASM attribute must be created according to the specifications of the microprocessor's interrupt hardware and is usually written in assembly language. If the programmer wants to use the μT-Kernel 3.0 API in the interrupt handler with the TA_ASM attribute, the implementation specification of μT-Kernel 3.0 for that microprocessor must be understood and appropriate processing must be performed.
Originally, the TA_HLNG attribute meant an interrupt handler written in a high-level language such as C, while TA_ASM meant one written in assembly language. In recent years, however, there are microprocessors, such as the Arm Cortex-M, that allow interrupt handlers to be written directly in C without an OS, as well as C compilers that provide features for writing interrupt handlers.
Therefore, the essential distinction today is not between high-level language and assembly language, but whether or not the handler goes through the common processing in the OS.
Notes on interrupt handlers
Note that the interrupt handler is not a kernel object in μT-Kernel 3.0, but a hardware feature.
An interrupt handler interrupts the normal program that is running and is executed preferentially. In other words, it has precedence over all tasks.
In addition, the μT-Kernel 3.0 API that can be used inside interrupt handlers is limited. For example, interrupt handlers cannot be in a waiting state like tasks, so APIs that may cause transition to a waiting state cannot be used inside interrupt handlers.
Interrupt Control
When μT-Kernel 3.0 is started, most interrupts are set to the disabled state and these interrupts do not occur. To use interrupts, follow the steps below.
- Call the interrupt handler definition API (tk_def_int) to define an interrupt handler for the target interrupt factor.
- Call the interrupt mode setting API (SetIntMode) as necessary to set the interrupt mode, such as the interrupt signal detection mode; the available settings depend on the microprocessor hardware.
- Call the interrupt enable API (EnableInt) to allow the target interrupt to occur. Thereafter, this interrupt will occur when its conditions are met.
The interrupt handler must be defined before the interrupt is enabled. If an interrupt occurs with no interrupt handler defined, μT-Kernel 3.0 executes the default interrupt handler. The processing content of the default interrupt handler depends on the implementation specification, but it usually performs error handling.
- When an interrupt occurs, the interrupt handler defined in the first step is executed. In the interrupt handler, appropriate processing is performed in response to the interrupt factor.
- In the interrupt handler, call the API for clearing the interrupt occurrence factor (ClearInt). If this is not done, the interrupt may remain pending, and the same interrupt handler may be called again as soon as it terminates.
- Interrupts can be disabled by calling the interrupt disable API (DisableInt).
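The steps above can be sketched as follows, assuming the μT-Kernel 3.0 environment. INTNO_EXAMPLE, the interrupt priority level, and the detection mode are hardware-dependent placeholders; the actual values must be taken from the implementation specification for the target microprocessor.

```c
#include <tk/tkernel.h>

#define INTNO_EXAMPLE  23          /* hypothetical interrupt number  */

/* TA_HLNG interrupt handler: a plain C function, invoked via the
 * kernel's common interrupt handler. */
LOCAL void int_handler(UINT intno)
{
    ClearInt(intno);               /* clear the occurrence factor so
                                      the interrupt does not re-fire */
    /* ... respond to the interrupt factor ... */
}

LOCAL void setup_interrupt(void)
{
    T_DINT dint = { .intatr = TA_HLNG, .inthdr = int_handler };

    tk_def_int(INTNO_EXAMPLE, &dint);    /* define the handler first  */
    SetIntMode(INTNO_EXAMPLE, IM_EDGE);  /* detection mode (hardware-
                                            dependent example)        */
    EnableInt(INTNO_EXAMPLE, 5);         /* enable at example level 5 */
    /* DisableInt(INTNO_EXAMPLE); disables the interrupt again.       */
}
```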