// Fraunhofer Institut fuer offene Kommunikationssysteme (FOKUS)
// Kompetenzzentrum fuer Satelitenkommunikation (SatCom)
// Stefan Bund <g0dil@berlios.de>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the
// Free Software Foundation, Inc.,
// 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
/** \mainpage libPPI : The Packet Processing Infrastructure

The PPI provides an infrastructure to create packet oriented network processing
applications. A PPI application is built by combining processing modules in a very flexible
manner.

\image html scenario.png "Target Scenario"
The PPI is built around some key concepts:

\li The PPI is based on processing \ref packets. It does not handle stream oriented channels.
\li The PPI is built around reusable \ref modules. Each module is completely independent.
\li Each module has an arbitrary number of \ref connectors, inputs and outputs.
\li The modules are connected to each other using flexible \ref connections.
\li Data flow throughout the network is governed via flexible automatic or manual \ref throttling.
\li Modules may register additional external \ref events (file descriptor events or timers).
The PPI thereby builds on the facilities provided by the other components of the SENF
framework. The target scenario above depicts a diffserv capable UDLR/ULE router including
performance optimizations for TCP traffic (PEP). This router is built by combining several
modules.
\section design Design considerations

The PPI interface is designed to be as simple as possible. It provides sane defaults for all
configurable parameters to simplify getting started. It also automates all resource
management. Especially to simplify resource management, the PPI will take many configuration
objects by value. Even though this is not as efficient, it frees the user from most resource
management chores. This decision does not affect the runtime performance since it only affects
the configuration step.
\section packets Packets

The PPI processes packets and uses the <a href="@TOPDIR@/Packets/doc/html/index.html">Packet
library</a> to handle them. All packets are passed around as generic Packet::ptr's; the PPI
does not enforce any packet type restrictions.
\section modules Modules

A module is represented by a class type. Each module has several components:

\li It may have any number of connectors (inputs and outputs).
\li Each module declares flow information which details the route packets take within the
module. This information does not define how the data is processed, it only tells where
data arriving on some input will be directed.
\li The module might take additional parameters.
\li The module might also register additional events.
Modules are divided roughly into two categories: I/O modules provide packet sources and sinks
(network connection, writing packets to disk, generating new packets) whereas processing modules
process packets internally. In the target scenario, <em>TAP</em>, <em>ASI Out</em>, <em>Raw
Socket</em> and in a limited way <em>Generator</em> are I/O modules whereas <em>PEP</em>,
<em>DiffServ</em>, <em>DVB Enc</em>, <em>GRE/UDLR</em>, <em>TCP Filter</em> and
<em>Stuffer</em> are processing modules. <em>ASI/MPEG</em> and <em>Net</em> are external I/O
ports which are integrated via the <em>TAP</em>, <em>ASI Out</em> and <em>Raw Sock</em> modules
using external events.
The following example module declares three I/O connectors (see below): <tt>payload</tt>,
<tt>stuffing</tt> and <tt>output</tt>. These connectors are defined as <em>public</em> data
members so they can be accessed from the outside. This is important as we will see below.

\code
class RateStuffer
    : public senf::ppi::Module
{
public:
    ActiveInput payload;
    ActiveInput stuffing;
    PassiveOutput output;

    RateStuffer(unsigned packetsPerSecond)
    {
        route(payload, output);
        route(stuffing, output);

        registerEvent(&RateStuffer::tick,
                      senf::ppi::IntervalTimer(1000u, packetsPerSecond));
    }

private:
    void tick()
    {
        // send a payload packet if one is available, a stuffing
        // packet otherwise
        if (payload)
            output(payload());
        else
            output(stuffing());
    }
};
\endcode
On module instantiation, it will declare its flow information with <tt>route</tt> (which
is inherited from <tt>senf::ppi::Module</tt>). Then the module registers an interval timer which
will fire <tt>packetsPerSecond</tt> times every <tt>1000</tt> milliseconds.
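The timer arithmetic can be illustrated with a small standalone sketch (hypothetical names, not the real <tt>senf::ppi::IntervalTimer</tt>): firing <tt>n</tt> times every 1000 milliseconds means consecutive ticks are spaced 1000/n milliseconds apart.

```cpp
#include <cassert>

// Hypothetical sketch of the interval timer semantics: the timer fires
// `count` times every `periodMs` milliseconds, i.e. the k-th tick of a
// cycle is due at k * (periodMs / count) milliseconds.
struct IntervalTimerSketch {
    unsigned periodMs;  // length of one cycle in milliseconds
    unsigned count;     // ticks per cycle

    // absolute due time (ms) of the k-th tick, counting from 0
    unsigned dueTime(unsigned k) const {
        return k * periodMs / count;
    }
};
```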
The processing of the module is very simple: Whenever a timer tick arrives, a packet is sent. If
the <tt>payload</tt> input is ready (see throttling below), a payload packet is sent, otherwise
a stuffing packet is sent. The module will therefore provide a constant stream of packets at a
fixed rate on <tt>output</tt>.
An example module to generate the stuffing packets could be

\code
class CopyPacketGenerator
    : public senf::ppi::Module
{
public:
    PassiveOutput output;

    // the template packet is the packet to be duplicated on every
    // output request ('template' itself is a C++ keyword and cannot be
    // used as a parameter name)
    CopyPacketGenerator(Packet::ptr templatePacket)
        : template_ (templatePacket)
    {
        output.onRequest(&CopyPacketGenerator::makePacket);
    }

private:
    Packet::ptr template_;

    void makePacket()
    {
        output(template_.clone());
    }
};
\endcode

This module just produces a copy of a given packet whenever output is requested.
\section connectors Connectors

Inputs and outputs can be active or passive. An \e active I/O is <em>activated by the
module</em> to send data or to poll for available packets. A \e passive I/O is <em>signaled by
the framework</em> to fetch data from the module or to pass data into the module.

To send or receive a packet (either actively or after packet reception has been signaled), the
module just calls the connector. This allows the module to generate or process multiple packets
in one iteration. However, reading will only succeed as long as packets are available from the
connection.

Since a module is free to generate more than a single packet on incoming packet requests, all
input connectors incorporate a packet queue. This queue is exposed to the module and allows the
module to process packets in batches.
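The queue semantics described here can be sketched in plain C++ (hypothetical names, not the actual connector API): the framework pushes packets into the input's queue, and the module may drain the whole batch in one handler invocation.

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <vector>

// Hypothetical sketch of a passive input connector with a packet queue.
using Packet = std::string;  // stand-in for a real packet type

class PassiveInputSketch {
public:
    void push(Packet p) { queue_.push_back(std::move(p)); }  // framework side
    bool ready() const { return !queue_.empty(); }
    Packet pop() { Packet p = queue_.front(); queue_.pop_front(); return p; }
    std::size_t size() const { return queue_.size(); }
private:
    std::deque<Packet> queue_;
};

// A module handler that processes all currently queued packets in one go
// instead of being invoked once per packet.
std::vector<Packet> drainBatch(PassiveInputSketch & in) {
    std::vector<Packet> batch;
    while (in.ready())
        batch.push_back(in.pop());
    return batch;
}
```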
\section connections Connections

To make use of the modules, they have to be instantiated and connections have to be created
between the I/O connectors. It is possible to connect any pair of input/output connectors as
long as one of them is active and the other is passive.

It is possible to connect two active connectors with each other using a special adaptor
module. This module has a passive input and a passive output. It will queue any incoming packets
and automatically handle throttling events (see below). This adaptor is automatically added by
the connect method if needed.
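The pairing rule can be summarized in a small standalone sketch (hypothetical names, not the actual connect implementation): a direct connection needs exactly one active end, and connecting two active ends costs one extra hop through the passive-passive adaptor.

```cpp
// Hypothetical sketch of the connector pairing rule.
enum class Mode { Active, Passive };

// a direct connection is valid only when exactly one end is active
bool directlyConnectable(Mode out, Mode in) {
    return (out == Mode::Active) != (in == Mode::Active);
}

// number of hops a packet crosses after a hypothetical connect():
// 1 for a direct active/passive link, 2 when a queueing adaptor
// (passive input, passive output) must be inserted between two
// active ends, 0 when no connection is possible at all
int connectionHops(Mode out, Mode in) {
    if (directlyConnectable(out, in))
        return 1;
    if (out == Mode::Active && in == Mode::Active)
        return 2;  // adaptor gives two valid active/passive pairs
    return 0;      // passive/passive: nothing would ever drive the link
}
```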
To complete our simplified example: Let's say we have an <tt>ActiveSocketInput</tt> and a
<tt>PassiveUdpOutput</tt> module. We can then use our <tt>RateStuffer</tt> module to build an
application which will create a fixed-rate UDP stream:

\code
RateStuffer rateStuffer (10);

senf::Packet::ptr stuffingPacket = senf::Packet::create<...>(...);
CopyPacketGenerator generator (stuffingPacket);

senf::UDPv4ClientSocketHandle inputSocket (1111);
senf::ppi::ActiveSocketInput udpInput (inputSocket);

senf::UDPv4ClientSocketHandle outputSocket ("2.3.4.5:2222");
senf::ppi::PassiveSocketOutput udpOutput (outputSocket);

senf::ppi::connect(udpInput.output, rateStuffer.payload,
                   dynamicModule<PassiveQueue>()
                       -> qdisc(ThresholdQueueing(10,5)) );
senf::ppi::connect(generator.output, rateStuffer.stuffing);
senf::ppi::connect(rateStuffer.output, udpOutput.input);
\endcode
First all necessary modules are created. Then the connections between these modules are set
up. The buffering on the udpInput <-> rateStuffer adaptor is changed so the queue will begin to
throttle only if more than 10 packets are in the queue. The connection will be unthrottled as
soon as there are no more than 5 packets left in the queue. This application will read
UDP packets coming in on port 1111 and will forward them to port 2222 on host 2.3.4.5 at a
fixed rate of 10 packets per second.
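The queueing discipline just described can be sketched as a simple hysteresis automaton (standalone C++ with hypothetical names; not the real ThresholdQueueing implementation): throttle once the queue exceeds the high threshold, unthrottle once it has drained back to the low threshold. The gap between the two thresholds prevents the connection from oscillating between the two states.

```cpp
// Hypothetical sketch of the ThresholdQueueing(10,5) behaviour.
class ThresholdSketch {
public:
    ThresholdSketch(unsigned high, unsigned low)
        : high_(high), low_(low), size_(0), throttled_(false) {}

    void enqueue() { ++size_; update(); }
    void dequeue() { if (size_ > 0) --size_; update(); }
    bool throttled() const { return throttled_; }

private:
    void update() {
        if (!throttled_ && size_ > high_)
            throttled_ = true;            // more than `high` packets queued
        else if (throttled_ && size_ <= low_)
            throttled_ = false;           // drained down to `low` packets
    }
    unsigned high_, low_, size_;
    bool throttled_;
};
```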
\section throttling Throttling

If a passive connector cannot handle incoming requests, this connector may be \e
throttled. Throttling a request will forward a throttle notification to the module connected
to that connector. The module then must handle this throttle notification. If automatic
throttling is enabled for the module (which is the default), the notification will automatically
be forwarded to all dependent connectors as taken from the flow information. From there it will
be forwarded to further modules and so on.
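The automatic forwarding along the flow information can be sketched as a graph traversal (standalone C++ with hypothetical names; the connector names used in the test are purely illustrative): each route() statement adds an edge, and a notification is propagated to all connectors reachable over those edges.

```cpp
#include <map>
#include <set>
#include <string>

// Hypothetical sketch of automatic throttle forwarding: the route()
// statements of the modules form an edge list, and a notification
// arriving on one connector is propagated to all dependent connectors
// until modules with no further routes (I/O modules) are reached.
using Connector = std::string;

class RoutingSketch {
public:
    void route(Connector const & from, Connector const & to) {
        routes_[from].insert(to);
    }
    // recursively forward a throttle notification along the routes,
    // collecting every connector it reaches
    void throttle(Connector const & c, std::set<Connector> & reached) {
        if (!reached.insert(c).second)
            return;                        // already visited
        for (Connector const & next : routes_[c])
            throttle(next, reached);
    }
private:
    std::map<Connector, std::set<Connector>> routes_;
};
```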
A throttle notification reaching an I/O module will normally disable the input/output by
disabling any external I/O events registered by the module. When the passive connector which
originated the notification becomes active again, it creates an unthrottle notification which
will be forwarded in the same way. This notification will re-enable any registered I/O events.
The above discussion shows that throttle events are always generated on passive connectors and
received on active connectors. To differentiate further, the throttling originating from a
passive input is called <em>backward throttling</em> since it is forwarded in the direction \e
opposite to the data flow. Backward throttling notifications are sent towards the input
modules. On the other hand, the throttling originating from a passive output is called
<em>forward throttling</em> since it is forwarded along the \e same direction the data
flows. Forward throttling notifications are therefore sent towards the output modules.
Since throttling a passive input may not disable all further packet delivery immediately, any
passive input contains an input queue. In its default configuration, the queue will send out
throttle notifications when it becomes non-empty and unthrottle notifications when it becomes
empty again. This automatic behavior may however be disabled. This allows a module to collect
incoming packets in its input queue before processing a bunch of them in one go.
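The default notification behavior can be sketched as follows (standalone C++, hypothetical names): a throttle notification is emitted the moment the queue becomes non-empty, and an unthrottle notification once it is empty again.

```cpp
#include <deque>
#include <string>

// Hypothetical sketch of the default input queue behaviour: throttle on
// becoming non-empty, unthrottle on becoming empty again.
class DefaultQueueSketch {
public:
    bool throttled = false;                         // state seen by the peer

    void push(std::string p) {
        queue_.push_back(std::move(p));
        if (queue_.size() == 1)
            throttled = true;                       // queue became non-empty
    }
    std::string pop() {
        std::string p = queue_.front();
        queue_.pop_front();
        if (queue_.empty())
            throttled = false;                      // queue became empty again
        return p;
    }
private:
    std::deque<std::string> queue_;
};
```

Disabling this automatism, as the text describes, would simply mean suppressing the two state changes so packets can accumulate.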
\section events Events

Modules may register additional events. These external events are very important since they
drive the PPI framework. Possible event sources are

\li time based events
\li file descriptors.

Here is some example code implementing the ActiveSocketInput module:
\code
class ActiveSocketInput
    : public senf::ppi::Module
{
    static PacketParser<senf::DataPacket> defaultParser_;

public:
    ActiveOutput output;

    typedef senf::ClientSocketHandle<
        senf::MakeSocketPolicy< senf::ReadablePolicy,
                                senf::DatagramFramingPolicy > > Socket;

    // I hesitate taking parser by const & since a const & can be bound to
    // a temporary even though a const & is all we need. The real implementation
    // will probably make this a template arg. This simplifies the memory management
    // from the user's pov.
    ActiveSocketInput(Socket socket, DataParser & parser = ActiveSocketInput::defaultParser_)
        : socket_ (socket),
          parser_ (parser),
          event_ (registerEvent( &ActiveSocketInput::data,
                                 senf::ppi::IOSignaler(socket, senf::ppi::IOSignaler::Read) ))
    {
        route(event_, output);
    }

private:
    Socket socket_;
    DataParser const & parser_;
    senf::ppi::IOSignaler::EventBinding event_;

    void data()
    {
        std::string data;
        socket_.read(data);
        output(parser_(data));
    }
};
\endcode
First we declare our own socket handle type which allows us to read packets. The constructor
then takes two arguments: a compatible socket and a parser object. This parser object gets
passed the packet data as read from the socket (an \c std::string) and returns a
senf::Packet::ptr. The \c PacketParser is a simple parser which interprets the data as specified
by the template argument.
We register an IOSignaler event. This event will be signaled whenever the socket is
readable. This event is routed to the output. This routing automates throttling for the socket:
Whenever the output receives a throttle notification, the event will be temporarily disabled.
Processing arriving packets happens in the \c data() member: This member simply reads a packet
from the socket. It passes this packet to the \c parser_ and sends the generated packet out.
\li Exception handling. It would be great to have a sane default exception handling freeing us
from most manual work. However, I don't think this is feasible.
\see \ref ppi_implementation
*/

/** \page ppi_implementation Implementation Overview
\section processing Data Processing

The processing in the PPI is driven by external events. Without external events <em>nothing will
happen</em>. When an external event is generated, the module called will probably either send or
receive data from an active connector.

Calling an active connector will directly call the handler registered at the connected passive
connector. This way the call and data are handed across the connections until an I/O module
finally handles the request (by not calling any other connectors).
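This direct hand-over can be sketched with plain std::function callbacks (hypothetical names, not the actual connector implementation): a packet sent on an active output is processed across any number of modules within a single call stack.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the call chain: sending on an active output
// directly invokes the handler registered at the connected passive input.
using Packet = std::string;

struct ActiveOutputSketch {
    std::function<void(Packet)> peer;   // handler of the connected passive input
    void operator()(Packet p) { peer(std::move(p)); }
};

// A trivial processing module: transforms the packet and sends it on,
// all within the caller's stack frame.
struct AppendModule {
    ActiveOutputSketch output;
    void input(Packet p) { output(p + "+"); }   // passive input handler
};
```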
Throttling is handled in the same way: Throttling a passive connector will call a corresponding
(internal) method of the connected active connector. This method will call registered handlers
and will analyze the routing information of the module for other (passive) connectors to call
and throttle. This will again create a call chain which terminates at the I/O modules. An event
which is requested to be throttled will be disabled temporarily. Unthrottling works in the
same way.
This simple structure is complicated by the existence of the input queues. This affects both
data forwarding and throttling:

\li A data request will only be forwarded if no data is available in the queue.
\li The connection will only be throttled when the queue is empty.
\li Handlers of passive input connectors must be called repeatedly until either the queue is
empty or the handler does not take any packets from the queue.
\section logistics Managing the Data Structures

The PPI itself is a singleton. This simplifies many of the interfaces (we do not need to pass
the PPI instance). Should it be necessary to have several PPI systems working in parallel
(either by registering all events with the same event handler or by utilizing multiple threads),
we can still extend the API by adding an optional PPI instance argument.
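A minimal sketch of such a singleton (hypothetical names, not the actual PPI class): a single instance reachable through a static accessor keeps the registration interfaces free of an explicit instance argument.

```cpp
// Hypothetical sketch of the singleton access pattern.
class PPISketch {
public:
    static PPISketch & instance() {
        static PPISketch instance;   // created on first use (Meyers singleton)
        return instance;
    }
    void registerModule() { ++modules_; }
    int moduleCount() const { return modules_; }
private:
    PPISketch() : modules_(0) {}
    PPISketch(PPISketch const &) = delete;   // no second instance by copying
    int modules_;
};
```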
Every module manages a collection of all its connectors and every connector has a reference to
its containing module. In addition, every connector maintains a collection of all its routing
information.

All this data is initialized via the routing statements. This is why \e every connector must
appear in at least one routing statement: These statements will, as a side effect, initialize the
connector with its containing module.
Since all access to the PPI via the module is via its base class, unbound member function
pointers can be provided as handler arguments: They will automatically be bound to the current
instance. This simplifies the PPI usage considerably. The same is true for the connectors: Since
they know the containing module, they can explicitly bind unbound member function pointers to
that module instance.
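How a base class can bind such unbound member function pointers may be sketched like this (standalone C++, hypothetical names): a template in the base class combines the pointer with `this`, downcast to the derived module type, into an ordinary callable.

```cpp
#include <functional>

// Hypothetical sketch of binding an unbound member function pointer to
// the current instance inside the module base class.
class ModuleBaseSketch {
protected:
    template <class Module>
    std::function<void()> bindHandler(void (Module::*handler)()) {
        Module * self = static_cast<Module *>(this);  // CRTP-style downcast
        return [self, handler]() { (self->*handler)(); };
    }
};

class TickerModule : public ModuleBaseSketch {
public:
    int ticks = 0;
    std::function<void()> event;
    TickerModule() {
        // the unbound pointer &TickerModule::tick is enough; no explicit
        // instance needs to be passed at the registration site
        event = bindHandler(&TickerModule::tick);
    }
private:
    void tick() { ++ticks; }
};
```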
\section random_notes Random implementation notes

Generation of throttle notifications: Backward throttling notifications are automatically
generated (if this is not disabled) whenever the input queue is non-empty \e after the event
handler has finished processing. Forward throttling notifications are not generated
automatically within the connector. However, the Passive-Passive adaptor will generate
forward throttling notifications whenever the input queue is empty.
*/

// Local Variables:
// c-file-style: "senf"
// indent-tabs-mode: nil
// ispell-local-dictionary: "american"
// End: