2. ASTRA Concepts

ASTRA is an Agent-Oriented Programming (AOP) language that is based on AgentSpeak(L), a theoretical language proposed by Anand Rao in 1996. This section of the guide will take you through the basic concepts of AgentSpeak(L) and explain how they are mapped to ASTRA.

2.1 Introduction to AgentSpeak(L)

2.1.1 What is AgentSpeak(L)?

AgentSpeak(L) is an agent programming language that is based on Belief-Desire-Intention (BDI) theory. This theory models rational decision making as a process of state manipulation. The idea is that the current state of the world can be represented as a set of beliefs (basically, facts about the state of the world) and the ideal state of the world can be represented as a set of desires. The theory then maps out how a rational decision maker achieves its desires – that is, how it changes the world so that its desires become its beliefs. For instance, a decision maker may believe that it is in Ireland, but it may also have a desire to be in China. BDI theory attempts to explain how the decision maker selects some course of action so that it eventually believes that it is in China, thus satisfying its desire to be in China.

The way in which BDI theory achieves this is by adding a third state – intentions – defined as a subset of desires that the decision maker is committed to achieving. Why a subset? Basically, in BDI theory, it is considered acceptable for desires to be mutually inconsistent. That is, an agent can have two desires that cannot be realised at the same time. For example, in addition to desiring to be in China, our example decision maker may also desire to be in France. The problem is that there is a physical constraint on the achievement of the desires – the decision maker cannot be in two places at the same time – so it can only satisfy one of its desires at a time. This issue can be generalised to the idea that decision makers are resource-bounded entities and may not be able to achieve all their desires due to a lack of sufficient resources. As a result, they must select a subset of those desires that they “believe” they are capable of realising given their resource constraints – these are their intentions. Once selected, the decision maker attempts to make its intentions into beliefs by identifying and following an appropriate course of action. The identification of this course of action, known as a plan, can be based on selection of the plan from a library of pre-written plans or through the use of a planner that constructs the plan on the fly (beliefs are the start state and intentions are the end/goal state). AgentSpeak(L) adopts the former of these approaches (a plan library).

There are two further refinements to BDI theory. First, the concept of a goal is often introduced as a replacement for desires. Goals are defined as a mutually consistent set of desires (so the decision maker could desire to be in both China and France, but could only have a goal to be in one of those places). AgentSpeak(L) adopts goals as the representation of future state. The second refinement is the idea of how to represent intentions. In the pure model, intentions are a subset of desires, but intentions are associated with commitment. This implies that a decision maker has some “plan of action” for achieving its intentions. As such, it is possible to represent intentions as either state (intention-to-achieve) or as the plan that will bring about that state (intention-to-do). AgentSpeak(L) adopts the latter model of intention (intention-to-do).

AgentSpeak(L) defines a set of programming constructs, encoded using a specific syntax and supported by a corresponding interpreter algorithm that is based on a Belief-Goal-Intention(to-do) model of rational decision making. The core constructs provided are:

  • Beliefs: predicate formulae representing facts about the state of the agent's environment. Together, the set of beliefs held by an agent is equivalent to the state of an object.

  • Goals: predicate formulae (prefixed with a bang operator (!)) that identify what the agent wants to do. Goals are not stored explicitly in the agent state; instead, they are declared as required and mapped contextually to a behaviour that will realise the goal. Goals are equivalent to method calls in object-oriented programming. The mapping is achieved through the use of events and associated event handlers, known as plan rules.

  • Events: Events drive the behaviour of an agent. Internally, the agent contains an event queue. On each iteration of the interpreter, one event is selected from the event queue and processed through contextual mapping of the event to an event handler (plan rule). AgentSpeak(L) includes events for: the adoption of new beliefs, the retraction of existing beliefs, and the adoption of goals. Events have no direct analogue in object-oriented programming – perhaps the closest concept they map onto is the message received by an object (which is matched against one of the methods supported by the object).

  • Plan Rules: Plan rules are the heart of an agent program; they define the core behaviours of the agent, contextually mapping those behaviours to the events that trigger them. Behaviours are specified as a sequence of plan operators, which support the following functionality: belief adoption/retraction, subgoal adoption, belief querying, and private actions. Plan rules are equivalent to methods in object-oriented programming, where the triggering event is equivalent to the method signature and the behaviour is equivalent to the method implementation.

The core interpreter cycle for AgentSpeak(L) can be reduced to the following steps:

1. select an event, e, from the agent's event queue

2. match the event to a plan rule, p, whose triggering event matches e and whose context is satisfied.

3. if the event is a belief adoption / retraction event, then create a new intention to process the behaviour specified in p; otherwise, update the intention that generated e to also process the behaviour specified in p.

4. select one intention, i, and execute its next step.

5. return to 1
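The cycle above is compact enough to simulate directly. The following is an illustrative sketch only (in Python, since AgentSpeak(L) itself describes the interpreter rather than being run by it); the names Plan and run are our own, and plan contexts, subgoal handling, and intention selection are collapsed to their simplest form:

```python
from collections import deque

class Plan:
    def __init__(self, trigger, steps):
        self.trigger = trigger   # e.g. "+!init"
        self.steps = steps       # list of step labels

def run(initial_events, plan_library, max_iterations=10):
    events = deque(initial_events)   # the agent's event queue
    intentions = []                  # each intention: list of remaining steps
    log = []
    for _ in range(max_iterations):
        # 1. select an event from the queue
        if events:
            e = events.popleft()
            # 2. match it to a plan whose triggering event matches
            for p in plan_library:
                if p.trigger == e:
                    # 3. (simplified) create a new intention for the plan body
                    intentions.append(list(p.steps))
                    break
        # 4. select one intention and execute its next step
        if intentions:
            i = intentions[0]
            if i:
                log.append(i.pop(0))
            if not i:                # completed intentions are dropped
                intentions.pop(0)
    return log                       # 5. loop back to 1

# The Hello World agent: one initial goal event, one matching plan.
plans = [Plan("+!init", ["print hello world"])]
trace = run(["+!init"], plans)
```

After the single event is handled and its one-step intention executed, the agent keeps cycling but has nothing further to do – mirroring the Hello World execution described in the next section.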

2.1.2 "Hello World" with AgentSpeak(L)

As a first example of AgentSpeak(L), we present the basic hello world program. This program consists of two statements: an initial goal (line 01) and a plan rule (lines 03-04). As can be seen, statements are terminated by a period (.). The first statement declares a goal, !init(). This goal results in a goal adoption event being added to the agent's event queue. This is done once, before the first iteration of the interpreter. The second statement is a plan rule. This rule is designed to handle the goal adoption event generated by the first statement. To specify a goal adoption event, the goal is simply prefixed by a + operator. The arrow (<-) operator is used to separate the triggering event from the behaviour implementation (which is on line 04). The behaviour contains a single plan operator – a private action that prints out the argument to the console.

01 !init().
02 
03 +!init() <-
04     println(hello world).

In terms of execution: on the first iteration, the interpreter selects the +!init() event; matches this event to the rule; and creates an intention to execute the behaviour associated with the rule. Next, the interpreter selects this newly created intention and executes the next step, which in this case involves “hello world” being printed out (this is an example of a private action). Upon completion of the step, the intention is marked as completed, and dropped. The agent continues to execute, but it never generates another event. This means that it never adopts another intention, which in turn means that the program does nothing more.

2.1.3 Declaring and Handling Subgoals

The second example program illustrates the use of goals (and in particular, subgoals) in AgentSpeak(L) programs. The program itself is a slightly modified version of the Hello World program that moves the code to print out “Hello World” into a subgoal.

01 !init().
02 
03 +!init() <-
04     !printHello().
05
06 +!printHello() <-
07     println(hello world).

This program is a slight modification of the previous program, where the print action is moved to a separate rule that is used to handle the adoption of the !printHello() goal (lines 06-07). This goal is invoked as a subgoal on line 04 of the program (in the previous program, this line contained the actual print action).

In terms of execution, the following happens: on iteration 1, the agent removes the +!init() goal adoption event from the event queue and matches it to the first rule (lines 03-04), causing an intention to be created. This intention is then selected by the agent and the first step is executed. This step is a subgoal plan operator, which has the effect of creating a +!printHello() goal adoption event. Because it is a subgoal, the intention is also suspended (meaning that it cannot be selected for execution). The goal adoption event also includes a reference to this intention, indicating that the event corresponds to a subgoal.

On iteration 2, the agent removes the +!printHello() goal adoption event from the event queue and matches it to the second rule (lines 06-07). Because the event was generated by a subgoal plan operator, the agent appends the plan part of the rule to the intention from which the subgoal was invoked, and resumes that intention. The result of this is that the agent has a single intention that combines the first and second rules. This is achieved by making an intention a stack. Each element of the stack contains a plan body and a program counter indicating the next step of that plan body. In this example, after the second event is handled, the intention contains two entries: the first represents the body of the !init() rule (lines 03-04), with a program counter indicating that the first step has been completed; the second represents the body of the !printHello() rule (lines 06-07), indicating that no steps have been completed. The second entry is at the top of the stack. This intention is then selected by the agent, and the next step is executed. In this case, the agent peeks at the top of the stack and executes the first step of the second entry (which calls the print action).

On iteration 3, the agent has no new events to process, so it simply selects the intention and executes the next step. When it peeks at the top entry in the intention, it notes that the entry is completed, so it removes that entry and then peeks at the new top entry. Again, the agent notices that this entry is also complete, so it removes the second entry, leaving the stack empty. This indicates to the agent that the intention has been completed, so it is dropped.
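The intention-as-stack discipline can be made concrete with a short simulation. This is an illustrative sketch (in Python; the dictionary-based encoding of stack entries and the plans table are our own invention, not AgentSpeak(L) or ASTRA structures):

```python
# Plan library for the subgoal example: each goal maps to its plan body.
plans = {
    "!init": ["!printHello"],
    "!printHello": ["println hello world"],
}

def execute_step(intention, log):
    """Execute one step of the topmost stack entry; pop completed entries."""
    # remove completed entries from the top of the stack
    while intention and intention[-1]["pc"] >= len(intention[-1]["body"]):
        intention.pop()
    if not intention:
        return False                 # empty stack: intention done, dropped
    entry = intention[-1]
    step = entry["body"][entry["pc"]]
    entry["pc"] += 1
    if step.startswith("!"):         # subgoal: push the matching plan body
        intention.append({"body": plans[step], "pc": 0})
    else:
        log.append(step)             # private action
    return True

log = []
# The intention created when the +!init() event is handled.
intention = [{"body": plans["!init"], "pc": 0}]
while execute_step(intention, log):
    pass
```

Running the loop pushes the !printHello() entry on top of the !init() entry, executes the print action, and then pops both completed entries, leaving the stack empty – exactly the trace walked through above.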

2.1.4 Managing Beliefs with AgentSpeak(L)

This final example illustrates how AgentSpeak(L) permits the modification of the agent's internal state through the belief update plan operators. One operator is provided to support the addition of new beliefs and a second operator is provided to support the removal of existing beliefs. No operator is provided for the modification of an existing belief (this is achieved through the retraction of the existing belief and the subsequent adoption of the new belief).

01 light(on).
02
03 +light(on) <-
04     println(the light is on, turn it off!);
05     -light(on);
06     +light(off).
07     
08 +light(off) <-
09     println(the light is off, turn it on!);
10     -light(off);
11     +light(on).

This program includes an initial belief, representing the fact that a light is on, and two rules. The triggering event of the first rule (lines 03-06) is the event that the agent adopts a belief that the light is on (like the initial belief). The body of the rule consists of a sequence of three actions: (1) it prints out a message to the console, (2) it retracts the belief that the light is on, and (3) it adopts a belief that the light is off. The second rule (lines 08-11) does the opposite of this – it retracts the belief that the light is off and adopts the belief that the light is on. In terms of behaviour, this program implements an infinite loop, where either the first rule or the second rule is executed on each iteration. In fact, the last operation of each rule generates the event that triggers the next rule.

In terms of execution, the following happens: on iteration 1, the agent adopts the belief that the light is on and adds the associated belief adoption event to the event queue. The agent then selects that event from the event queue and handles it by matching the event with the first rule (lines 03-06) and adopting a new intention that contains the associated plan. The agent then selects this intention for execution and executes the first step of the plan, which prints out “the light is on, turn it off!”.

On iteration 2, the agent does not select an event because the event queue is empty. It does, however, select the intention again, this time executing the second step of the plan, which causes the belief light(on) to be dropped. This action has the side effect of generating a belief retraction event that is added to the agent's event queue.

On iteration 3, the agent selects the belief retraction event from the event queue and attempts to match it against a rule. No matching rule exists, so this event is ignored (in some implementations the event queue is filtered so that this type of event is never added, as it can never affect the behaviour of the agent). The intention is selected for a third time, and the last step is executed, resulting in the belief light(off) being adopted. Again, this has a side effect – namely the generation of a belief adoption event, which is added to the event queue. At the end of this iteration, the intention is marked as completed and dropped.

On iteration 4, the agent selects the belief adoption event and matches it to the second rule (lines 08-11). This results in the adoption of a new intention that contains the associated plan. The agent selects this intention and executes the first step, which results in the following message being printed to the console: “the light is off, turn it on!”.

Over the next two iterations, the agent drops the belief light(off) and adopts the belief light(on), triggering the first rule again, so the behaviour described for iterations 1-3 is repeated; this finishes on iteration 9, after which the behaviour of iterations 4-6 is repeated, and so on. In fact, the overall behaviour of the agent is an infinite loop in which the agent repeatedly prints the statement that the “light is on…” followed by the statement that the “light is off…”.
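The event-driven loop just traced can be simulated in a few lines. The sketch below is our own compressed encoding (in Python; the rule table and the print:/+/- step prefixes are simplifications, not AgentSpeak(L) syntax) and it cuts the loop off after four messages, since the real agent never terminates:

```python
from collections import deque

# Each rule: triggering event -> plan body (a print action plus belief updates).
rules = {
    "+light(on)":  ["print:the light is on, turn it off!", "-light(on)", "+light(off)"],
    "+light(off)": ["print:the light is off, turn it on!", "-light(off)", "+light(on)"],
}

def simulate(initial_event, max_events=4):
    events = deque([initial_event])
    printed = []
    while events and len(printed) < max_events:
        body = rules.get(events.popleft())
        if body is None:
            continue                  # unhandled events (e.g. retractions) are ignored
        for step in body:
            if step.startswith("print:"):
                printed.append(step[6:])
            else:
                events.append(step)   # belief updates generate new events
    return printed

out = simulate("+light(on)")
```

Each rule's final belief update enqueues the event that triggers the opposite rule, so the output alternates between the two messages indefinitely.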

2.1.5 Recap

AgentSpeak(L) is an event-driven language. Two main types of event exist: belief events and goal events. Belief events are added whenever the beliefs (state) of the agent change – they are generated for the addition of new beliefs or the dropping of existing beliefs. Goal events are added when the agent is created (initial goals) or when the agent reaches a decision point. The idea of a goal is to indicate what you want to happen next without having to specify how it will be done.

Agents make decisions by processing events. Events are processed in the order they arrive. Processing an event involves matching it to a plan that is applicable in the current context. A plan's applicability is determined by a context condition, which is a bit like a guard in an if statement. If the context is true with respect to the current beliefs (state) of the agent, then the rule is applicable; otherwise it is not. When processing an event, the agent identifies all plans whose triggering event matches the event being processed. Those plans are then filtered for applicability (any whose context is not true are removed) and a single plan is chosen from the remainder. Typically, the choice is determined by the order in which the plans were written in the agent program (nearer the start of the program = higher priority).

Once a plan is selected, the agent must execute it. This is done by adopting a new intention or refining an existing intention. In the case where the event is a belief event or an initial goal event, a new intention is created. In the case where the event is a subgoal event, the plan is added to the existing intention. At any point in time, an agent can have multiple intentions. Intentions are executed in parallel. On each iteration of the agent interpreter, a single intention is selected (if one exists) and the next step of that intention is executed.
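The match-filter-order scheme described in this recap can be summarised in a few lines. This is a hedged sketch of the selection logic only (in Python; select_plan and the dictionary encoding of rules are invented for illustration – real interpreters unify event arguments rather than comparing strings):

```python
def select_plan(event, plan_library, beliefs):
    # 1. identify all plans whose triggering event matches
    matching = [p for p in plan_library if p["trigger"] == event]
    # 2. filter for applicability: context must hold in the current beliefs
    applicable = [p for p in matching if p["context"] is None or p["context"] in beliefs]
    # 3. pick the first remaining plan (earlier in the program = higher priority)
    return applicable[0] if applicable else None

# Two overloaded rules for the same goal: a guarded special case, then a default.
library = [
    {"trigger": "+!light(on)", "context": "light(on)", "body": "do nothing"},
    {"trigger": "+!light(on)", "context": None,        "body": "turn the light on"},
]

chosen = select_plan("+!light(on)", library, beliefs={"light(off)"})
```

With the light off, the first rule's context fails and the default rule is chosen; with the light already on, the higher-priority guarded rule wins.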

2.2 Translating AgentSpeak(L) into ASTRA

ASTRA is based upon AgentSpeak(L) in that it provides all of the same basic functionality as AgentSpeak(L), but then augments this basic functionality with a range of additional features that we believe result in a more practical Agent Programming Language. Many of the features of the language will be introduced over the coming lessons; here we attempt to provide a direct mapping from AgentSpeak(L) to the equivalent ASTRA functionality.

2.2.1 Types in ASTRA

Unlike most logic-based languages, ASTRA is strongly typed. It includes a number of primitive types that correspond to the primitive types in Java. The only difference is that the Java array type is replaced by a list type in ASTRA. This reflects the logical nature of ASTRA, where lists are considered a core type of object.

  • int: integer value that maps onto the java.lang.Integer class and the int Java data type. Example literals: 5, -11, 127

  • long: integer value that maps onto the java.lang.Long class and the long Java data type. Example literals: 55l, -225l, 12954l

  • float: real value that maps onto the java.lang.Float class and the float Java data type. Example literals: 12.3f, -4.567f

  • double: real value that maps onto the java.lang.Double class and the double Java data type. Example literals: 9.876, -1234.5678

  • char: character value that maps onto the java.lang.Character class and the char Java data type. Example literals: 'a', '3', '@'

  • boolean: boolean value that maps onto the java.lang.Boolean class and the boolean Java data type. Example literals: true, false

  • string: string of characters that maps onto the java.lang.String class. Example literals: "animal", "feel", "12\t34"

  • list: list of values that maps onto the astra.term.ListTerm class, which itself extends java.util.List. Example literals: [1, 2, 3, 4], ["the", 4, 'a', true]

  • object_ref: the type is a Java class name and the value is an object reference (see later for examples). No literal form.

2.2.2 Declaring an Agent

ASTRA programs are designed to be familiar, and the syntax is based on Java. Agent programs are written in a text file that has a ".astra" extension and are declared using the agent keyword, followed by an identifier (which should match the name of the file that the program is written in) and the program itself, which is enclosed in a set of braces.

agent MyFirstAgent {

}

As can be seen in the snippet of code above, the agent program name follows the same conventions as Java (camel notation with a capitalised first letter). This program should be stored in a file called MyFirstAgent.astra. Within the braces, you are able to declare initial beliefs and goals, plan rules, inference rules, and other ASTRA specific constructs.

2.2.3 Beliefs

Beliefs are basically the same as in AgentSpeak(L), except that in ASTRA they are typed (see the Types in ASTRA section for a list of valid types). This means that each argument of a predicate has a type and, when used, the value in each argument must match the associated type. Failure to use the appropriate type results in an error at compile time.

To reduce confusion and to inform the compiler of the set of beliefs that we expect to see in the program, ASTRA includes a types construct. This construct can be used to define sets of belief templates that are used by the compiler to check for incorrect type usage. An advantage of introducing this model is that it forces the developer to think about what beliefs they will use to model the agent's environment before they implement behaviours. As a simple illustration, consider an agent that is monitoring a combination of a light switch and a light. The light switch can be in an up or down position, and the light can be on or off. We could model this as two belief templates, as shown below:

types lightExample {
    formula switch(string); // the argument should be "up" or "down"
    formula light(string); // the argument should be "on" or "off"
}

The initial state of the agent can be declared using the initial keyword. For example, if the initial state of the light system described above was the light off and the switch up, then this could be modelled as the initial beliefs:

initial switch("up");
initial light("off");

Alternatively, both statements can be combined as follows:

initial switch("up"), light("off");

2.2.4 Goals

Goals are treated in a similar way to beliefs, with the exception that goals do not require templates. This means that you can use any formula to define a goal. Within the AOP community, there is some debate around this. Goals that can be mapped to beliefs are often called declarative goals: they declare a belief state that you want to achieve. AgentSpeak(L), and hence ASTRA, does not force this, and also allows non-declarative goals. If we use the light example from the beliefs section, then the goal to turn the light on can be declared declaratively as !light("on") or non-declaratively as !turnLight("on") (there are other possible forms for the non-declarative version – !foo("on") would also be a valid non-declarative goal).

As with beliefs, the initial goals of an agent can be declared using the initial statement. For example, to give an agent an initial goal to turn the light on, we could write:

initial !light("on");

2.2.5 Events

ASTRA supports the same event types as are defined in AgentSpeak(L): belief and goal events. Again, just like beliefs and goals, events are typed. For example, the event that corresponds to the adoption of the belief light("on") is +light("on") and the event that corresponds to the retraction of that belief is -light("on"). Similarly, the event that corresponds to the adoption of the goal !printState("alive") is +!printState("alive"). We will discuss the impact of typing on triggering events in the Plan Rules section.

In addition to the default event types, ASTRA also supports the creation of custom events which can be defined by the developer. Details of how to create custom events will be described here. Examples of custom events that you can use in ASTRA include: EIS and CArtAgO events (environment events), and ACRE events (conversation management events). A custom event type is also provided for messages. This second new event type will be introduced in section aaaa.

2.2.6 Plan Rules

In ASTRA, plan rules are defined using a Java-style syntax which is quite different to the more logic style used in AgentSpeak(L). Specifically, the plan body part of a rule is implemented as a code block which itself is made up of a sequence of primitive statements, control flow statements, or sub-blocks. Primitive statements are terminated by a semi-colon (;). An abstract illustration of this can be seen below:

rule te : ctxt {
    Ac1;
    Ac2;
    {
        Ac3;
        Ac4;
    }
}

Notice that in the above example, the rule keyword is used to declare that we have a rule. This is followed by a triggering event (te) and the plan's context (ctxt), which is optional and may be omitted when it is true. The plan body is defined as a code block that consists of a sequence of three statements: two primitive statements, Ac1 and Ac2, and a code sub-block that contains two more (sequentially executed) primitive statements, Ac3 and Ac4. This syntax is the standard syntax for any C-derived programming language, including Java.

In ASTRA, triggering events typically contain typed variables that are matched against the arguments of specific events. For example, a triggering event to match the adoption of the belief that the light has changed state would be: +light(string state).

Let's explore plan rules in a little more detail. Consider the situation where we want our agent to be able to turn on the light. The most natural way to express this is through a declarative goal: !light("on"). To get the agent to attempt to achieve this goal, we need a rule with a triggering event: +!light("on"). The expected behaviour of this rule is that, afterwards, the agent has the belief that the light is on: light("on").

rule +!light("on") {

}

In terms of implementing this behaviour, it turns out that there are two scenarios:

  1. That the light is already on, so the goal is achieved by default
  2. That the light is off.

When we program the agent, we need to cater for both scenarios. Luckily, the first scenario is easy to express:

rule +!light("on") : light("on") {}

The second scenario is a little more difficult because we need to do something to turn the light on:

rule +!light("on") : light("off") {
    // turn on the light
}

The simplest solution here is to simply update the state of the agent to reflect the target state of the goal:

rule +!light("on") : light("off") {
    -light("off");
    +light("on");
}

This rule deals with the light being turned on. We can write a similar rule to handle the goal to turn the light off.

rule +!light("off") : light("on") {
    -light("on");
    +light("off");
}

All three rules can be joined together and simplified somewhat:

rule +!light("on") : light("on") {}

rule +!light("on") {
    -light("off");
    +light("on");
}

rule +!light("off") {
    -light("on");
    +light("off");
}

Specifically, note how we have been able to drop the context on the second and third rules. This is because the first rule captures the situation where the goal has already been achieved (i.e. it is a belief of the agent), so the remaining rules assume that the goal is not achieved.

Of course, none of these rules consider the state of the switch - the agent directly manipulates the state of the light. We leave a study of how to model turning the light on through the use of the switch to a later guide on Designing Agent Programs with ASTRA.

2.3 Rewriting AgentSpeak(L) Programs in ASTRA

2.3.1 Hello World

The first program was a basic hello world program for AgentSpeak(L). The ASTRA equivalent is:

01 agent Hello {
02     module Console console;
03
04     initial !init();
05 
06     rule +!init() {
07         console.println("hello world");
08     }
09 }

This program is semantically identical to the AgentSpeak(L) program, but is longer. The reason for this additional length is that ASTRA is more verbose and also does not hide as much. In ASTRA, you must explicitly declare any libraries that you want to use. These libraries are implemented as modules that contain annotated methods that can be called from the agent code. In the above example, we use the astra.lang.Console library, which agents can use to write to / read from the console. In this case, the agent prints out "hello world" to the console.

2.3.2 Subgoals

The second example illustrates the use of goals in an AgentSpeak(L) program. The ASTRA equivalent is:

01 agent Subby {
02     module Console C;
03 
04     initial !init();
05 
06     rule +!init() {
07         !printHello();
08     }
09
10     rule +!printHello() {
11         C.println("hello world");
12     }
13 }

Notice that in this example, we use the identifier C for the Console module. This is to illustrate that the identifier does not need to match the name of the module. The expected behaviour of this program is that the agent adopts an initial !init() goal. It handles this goal through the first rule, which begins on line 06. This rule causes the agent to adopt a subgoal !printHello(), which is handled by the second rule, starting on line 10. This second rule prints out "hello world". Neither of the goals used in this example is declarative.

2.3.3 Belief Queries

The third example below shows an ASTRA agent that uses the belief query plan operator. It also includes an initial belief (as well as an initial goal).

01 agent Query {
02     module Console C;
03 
04     types eg {
05         formula is(string, string);
06     }
07 
08     initial !init();
09     initial is("rem", "happy");
10
11     rule +!init() {
12         C.println("starting");
13         query(is("rem", "happy"));
14         C.println("first hurdle passed");
15         query(is("rem", "sad"));
16         C.println("ending");
17     }
18 }

Notice that the query operator “?” is replaced by the query keyword in ASTRA. Additionally, for the first time, we see the introduction of the types keyword. This keyword is used to specify the types of beliefs that are valid for a given program. They are used for type checking – if you use a belief that is not declared in this section of the program, the compiler will generate an error indicating that the belief has not been specified. The identifier “eg” is used to associate a unique label with the types. A program can have many types blocks.

The expected output of this program is that the agent will print out "starting" and "first hurdle passed". It will not print out "ending" because the query on line 15 fails causing the plan to fail.

2.3.4 Belief Update

This final example illustrates how AgentSpeak(L) permits the modification of the agent's internal state through the belief update plan operators. The ASTRA equivalent is:

01 agent Uppy {
02     module Console C;
03
04     types uppy {
05         formula light(string);
06     }
07
08     initial light("on");
09
10     rule +light("on") {
11         C.println("the light is on, turn it off!");
12         -light("on");
13         +light("off");
14     }
15     
16     rule +light("off") {
17         C.println("the light is off, turn it on!");
18         -light("off");
19         +light("on");
20     }
21 }

2.4 Designing Agent Programs in ASTRA

This guide aims to walk you through the process of designing and implementing an agent. The type of agent we will develop is purely mental, in that everything that happens occurs in the "mind" of the agent. Specifically, we will build a mental model of a light switch and the attached light. The scenario is taken from the previous section. In this guide, we will develop rules that allow the agent to manipulate the light, turning it on and off through the use of the light switch. The starting point for this guide is the LightSwitch agent program:

agent LightSwitch {
    types lightExample {
        formula switch(string);
        formula light(string);
    }

    initial switch("up"), light("off");

    rule +!light(string X) : light(X) { }

    rule +!light("on") {
        -light("off");
        +light("on");
    }

    rule +!light("off") {
        -light("on");
        +light("off");
    }
}

As was described at the end of that guide, this solution is not really optimal as it does not consider the state of the switch. An alternative solution is to replace the rules to handle the !light(...) goals with a different rule:

rule +!light("on") {
    !flipSwitch();
}

Unlike the !light(...) goal, this subgoal is not a declarative goal. We could have used the declarative goal !switch("down"), but this assumes that the down position on the switch means "off", but depending on the wiring, it could also mean on, or could be non-deterministic if there are two light switches... Flipping the switch is a more appropriate abstraction of the goal as it means that the switch should change from one state to the other.

Before moving on, it's worth noting that, when we put the two rules together, we can simplify them a little:

rule +!light("on") : light("on") {}

rule +!light("on") {
    !flipSwitch();
}

Notice that we have dropped the context on the second rule. This is because ASTRA (and AgentSpeak(L)) use rule order to select which rule to apply to the given event. When overloading rules in this way, you can think of the last rule as representing the default behaviour and earlier rules representing specific cases that override the default behaviour.

For the !flipSwitch() goal, what we want to capture is the following: if the switch is down, move it up; alternatively, if the switch is up, move it down. We can capture this with two rules, in the same way as we did for the !light(...) goal:

rule +!flipSwitch() : switch("down") {
    !switch("up");
}

rule +!flipSwitch() {
    !switch("down");
}

This is the ASTRA equivalent of an if-else statement, where we have one rule for each switch state. An alternative approach would be to model the transitions of the switch and write a single, more general rule. This requires a little more work, but it is a nicer solution. First, we need to add a belief to model a transition. Let's use:

types lightExample {
    ...
    formula switchTransition(string, string); // first arg is current state, second arg is target state.
}

Now, we can use this belief template to create beliefs that map the transition:

initial switchTransition("down", "up"), switchTransition("up", "down");

This leads to a new !flipSwitch(...) rule:

rule +!flipSwitch() : switch(string X) & switchTransition(X, string Y) {
    !switch(Y);
}

This new rule basically says: if we want to flip the switch and we know it is in position X and we know that X transitions to Y, then adopt a goal to set the switch to position Y.
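The lookup that this rule performs can be sketched in Python (a hypothetical illustration, not ASTRA): the switchTransition beliefs behave like a table mapping each state to its successor, and the rule's context binds X to the current state and Y to the table entry for X.

```python
# Sketch of the transition-table lookup in the +!flipSwitch() rule:
# the switchTransition(X, Y) beliefs form a simple state -> state map.
switch_transition = {"down": "up", "up": "down"}

def flip_switch(current_state):
    # X = current_state; Y is found such that switchTransition(X, Y) holds
    return switch_transition[current_state]   # target of the !switch(Y) subgoal

print(flip_switch("up"))
```

Because both transitions are in the table, the single rule covers both directions, which is what makes it more general than the two-rule version above.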

The last part is to deal with the !switch(...) goal. This goal is similar to the !flipSwitch() goal, but instead of resulting in a subgoal, it results in a change to the agent's state:

// if the switch is already in state X, do nothing
rule +!switch(string X) : switch(X) {}

// if the switch is not in state X, change it to state X
rule +!switch(string X): switch(string Y) & switchTransition(Y, X) {
    -switch(Y);
    +switch(X);
}

This rule causes the switch belief to be updated to reflect the change of switch state, but there is also an indirect impact on the state of the light: it must turn on or off accordingly. We can use a similar rule to the one we used to handle the !flipSwitch() goal:

types lightExample {
    ...
    formula lightTransition(string, string);
}

initial lightTransition("off", "on"), lightTransition("on", "off");

rule +switch(string X) : light(string Y) & lightTransition(Y, string Z) {
    -light(Y);
    +light(Z);
}

Informally, the rule states: if the state of the switch changes, the state of the light is Y, and you know that Y transitions to Z, then update the state of the light from Y to Z.

Finally, let's put all of this together into a single program:

agent LightSwitch {
    types lightExample {
        formula switch(string); // the argument should be "up" or "down"
        formula light(string); // the argument should be "on" or "off"
        formula switchTransition(string, string); // first arg is current state, second arg is target state.
        formula lightTransition(string, string);
    }

    initial lightTransition("off", "on"), lightTransition("on", "off");
    initial switchTransition("down", "up"), switchTransition("up", "down");
    initial switch("up"), light("off");

    rule +!light(string X) : light(X) { }

    rule +!light(string X) {
        !flipSwitch();
    }

    rule +!flipSwitch() : switch(string X) & switchTransition(X, string Y) {
        !switch(Y);
    }

    // if the switch is already in state X, do nothing
    rule +!switch(string X) : switch(X) {}

    // if the switch is not in state X, change it to state X
    rule +!switch(string X): switch(string Y) & switchTransition(Y, X) {
        -switch(Y);
        +switch(X);
    }

    rule +switch(string X) : light(string Y) & lightTransition(Y, string Z) {
        -light(Y);
        +light(Z);
    }
}
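The behaviour of the complete program can be simulated with a short Python sketch (hypothetical, for illustration; ASTRA's real execution model is event-driven, which is flattened into direct calls here). Achieving !light("on") flips the switch, and the +switch belief-change event propagates to the light via the transition tables:

```python
# Sketch simulating the complete LightSwitch program's behaviour.
switch_transition = {"down": "up", "up": "down"}   # switchTransition beliefs
light_transition = {"off": "on", "on": "off"}      # lightTransition beliefs
beliefs = {"switch": "up", "light": "off"}         # initial beliefs

def on_switch_changed():
    # +switch(X) rule: a switch change toggles the light via lightTransition
    beliefs["light"] = light_transition[beliefs["light"]]

def achieve_switch(target):
    # +!switch(X) rules: do nothing if already in state X, else transition
    if beliefs["switch"] != target:
        beliefs["switch"] = target
        on_switch_changed()                        # the +switch(X) event fires

def achieve_light(target):
    # +!light(X) rules: do nothing if satisfied, else flip the switch
    if beliefs["light"] != target:
        achieve_switch(switch_transition[beliefs["switch"]])

achieve_light("on")
print(beliefs)   # the switch is now "down" and the light is "on"
```

Running the sketch shows the chain described above: the light goal produces a switch subgoal, the switch belief changes, and the belief-update rule turns the light on. Repeating achieve_light("on") is a no-op, matching the first !light(...) rule.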

There are some obvious questions about the above code:

  • How does the agent know when to change the light?

  • The agent changes its belief about the light's state, but how do we link it to an actual light?

Both of these questions relate to the same issue: how to link the agent to external systems. In ASTRA, we do this through the use of Modules. In fact, we would not normally model the relationship between the switch and the light; this information would be hidden behind the module. Normally, an agent performs actions that affect its environment, and then observes the effect of those actions on the environment. We will discuss how to implement modules later, in the section on Modules.