Given the goals that we've written, we need to reduce our information to individual statements that can later be used in the software requirements document. Software requirements are basically goals that are assigned to a single agent. System requirements are goals that involve multiple agents. A goal may or may not correspond to a system requirement, depending on which system agents are involved in its satisfaction. Some of these statements may relate to the system-as-is, but in the Requirements section of the requirements document, requirements necessarily involve the system-to-be.

We will also have statements that state assumptions. As with goals, assumptions may be prescriptive or descriptive. Prescriptive assumptions are our expectations: they are not enforced by the system-to-be but instead capture what we hope the environment will do. For example, passengers will get off the train when the doors open at their destination. Fingers crossed they don't miss their stop, get off early, get off late, or somehow jump out in between.

Descriptive statements are divided into domain properties and domain hypotheses, both of which describe the environment. They come to us preassigned, by organizational components or maybe even by Mother Nature. A domain property holds invariably, regardless of the system's behavior; for example, a train is moving if and only if its physical speed is non-null. A domain hypothesis is also descriptive, but it is not guaranteed to hold invariably; for example, train tracks are in good condition except when track segment X is under maintenance. If we go back quickly to our library system, another domain hypothesis could be that the university library has about 20,000 patrons. In our meeting system, a domain property would be that Saturdays are excluded dates for meetings.

Goals can be classified along two dimensions. One dimension asks what kind of concern you're dealing with: is it a functionality constraint or a quality constraint? The other asks whether the goal prescribes intended system behaviors or prescribes preferences among alternative behaviors. Along that second dimension, behavioral goals and soft goals form a taxonomy and do not overlap. Behavioral goals prescribe behaviors. Soft goals have a bit of fluffiness to them: they allow you to state your preferences among alternative behaviors.

When stating behavioral goals, we should be able to write them with verbs like achieve, maintain, or avoid. Again, behavioral goals prescribe the intended system behaviors declaratively; they implicitly define the maximal set of admissible agent behaviors. When you look at a behavioral goal, you should be able to say, "Yup, we succeeded," or, "No, we did not." Very cut and dried. Behavioral goals can also be stated in a partial sense; please do see the reading for information about partial goal satisfaction, soft goals, and their challenges. One hundred percent goal satisfaction at the system level allows us to build operation models, which show that we meet the behavioral goal entirely. These statements can be diagrammed or modeled, and this is usually done with UML use case diagrams or state diagrams. You can also translate these statements into languages that can be used for more formal analysis; a language you might use that's very popular right now is Z. Behavioral goals allow us to write statements such as: the worst-case stopping distance of the train is maintained.
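To see what such a statement might look like once translated into a formal notation, here is an illustrative sketch in a temporal-logic style. This is not Z and not an official formalization; all of the identifiers (Train, Following, Dist, WCStopDist) are invented for the example:

```latex
% Illustrative sketch only -- not Z, and not an official formalization.
% Identifiers (Train, Following, Dist, WCStopDist) are invented for the example.
% Read Following(tr1, tr2) as "tr2 is following tr1"; \Box means "at all times".
Goal \(\mathrm{Maintain}[\mathit{WorstCaseStoppingDistance}]\)

FormalSpec
\[
  \forall\, tr_1, tr_2 : \mathit{Train}\ \cdot\
  \mathit{Following}(tr_1, tr_2) \;\Rightarrow\;
  \Box \bigl( \mathit{Dist}(tr_1, tr_2) \ \ge\ \mathit{WCStopDist}(tr_2) \bigr)
\]
```

Read informally: at every point in time, a following train keeps at least its own worst-case stopping distance from the train ahead. Notice how the formula is clear-cut, either a behavior satisfies it or it doesn't.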
Another example, from our library system: we could ask, "Is a reminder sent out if the book is not returned by its due date?" These are clear-cut behavioral goals. Note that these two statements also rely on other definitions, and those definitions need to be specified as well; how we cover definitions will be discussed in the SRS documentation and diagramming course.

As a behavioral goal, we may state that all train doors shall always remain closed while the train is moving. The diagram here shows one admissible sequence of state transitions that is implicitly defined by that goal. There are two controlled state variables here. What are they? The first is the train's movement; the second is the train's doors. Each state aggregates two substates, one for controlling the movement and the other for controlling the doors. We can represent the desired behavior of the train in a diagram like this. Obviously, this does not cover all desired behavior for the train; it only shows the behavior for this goal. At the top of each circle we have the status of the train's movement; at the bottom, the status of the doors. This behavioral sequence demonstrates the overall desired behavior between those two agents.
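To make that concrete, here is a minimal sketch in Python of the two controlled state variables and of one admissible sequence like the one in the diagram. The state names and helper functions are assumptions made for this illustration, not part of the course material:

```python
# Minimal sketch (illustrative only) of the state space implied by the goal
# "all train doors shall always remain closed while the train is moving".
# Two controlled state variables: movement and doors. Names are assumed.

from itertools import product

MOVEMENT = {"stopped", "moving"}
DOORS = {"closed", "open"}

def admissible(state):
    """A state satisfies the goal unless the train is moving with open doors."""
    movement, doors = state
    return not (movement == "moving" and doors == "open")

def admissible_transition(before, after):
    """For this invariant goal, a transition is admissible if both states satisfy it."""
    return admissible(before) and admissible(after)

# One admissible sequence of state transitions, in the spirit of the diagram:
# stopped/open -> stopped/closed -> moving/closed -> stopped/closed -> stopped/open
sequence = [
    ("stopped", "open"),
    ("stopped", "closed"),
    ("moving", "closed"),
    ("stopped", "closed"),
    ("stopped", "open"),
]

assert all(admissible_transition(a, b) for a, b in zip(sequence, sequence[1:]))

# The goal implicitly defines the maximal set of admissible states:
print(sorted(s for s in product(MOVEMENT, DOORS) if admissible(s)))
```

The point of the sketch is the same as the diagram's: the goal rules out exactly one combination (moving with open doors) and leaves every other behavior admissible.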