// Fraunhofer Institut fuer offene Kommunikationssysteme (FOKUS)
// Kompetenzzentrum fuer Satellitenkommunikation (SatCom)
// Stefan Bund <g0dil@berlios.de>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the
// Free Software Foundation, Inc.,
// 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
/** \mainpage libPPI : The Packet Processing Infrastructure

The PPI provides an infrastructure to create packet oriented network processing
applications. A PPI application is built by combining processing modules in a very flexible
manner.

\image html scenario.png Target Scenario
The PPI concept is built around several key ideas:

\li The PPI is based on processing \ref packets. It does not handle stream oriented channels.
\li The PPI is built around reusable \ref modules. Each module is completely independent.
\li Each module has an arbitrary number of \ref connectors, inputs and outputs.
\li The modules are connected to each other using flexible \ref connections.
\li Data flow throughout the network is governed via flexible automatic or manual \ref throttling.
\li Modules may register additional external \ref events (file descriptor events or timers).
The PPI thereby builds on the facilities provided by the other components of the SENF
framework.
Modules are divided roughly into two categories: I/O modules provide packet sources and sinks
(network connections, writing packets to disk, generating new packets) whereas processing
modules process packets internally. The target scenario above depicts a diffserv capable
UDLR/ULE router including performance optimizations for TCP traffic (PEP). This router is built
by combining several modules. In this scenario, <em>TAP</em>, <em>ASI Out</em>, <em>Raw
Socket</em> and in a limited way <em>Generator</em> are I/O modules whereas <em>PEP</em>,
<em>DiffServ</em>, <em>DVB Enc</em>, <em>GRE/UDLR</em>, <em>TCP Filter</em> and
<em>Stuffer</em> are processing modules. <em>ASI/MPEG</em> and <em>Net</em> are external I/O
ports which are integrated via the <em>TAP</em>, <em>ASI Out</em> and <em>Raw Sock</em>
modules using external events.
\section packets Packets

The PPI processes packets and uses the <a href="@TOPDIR@/Packets/doc/html/index.html">Packet
library</a> to handle them. All packets are passed around as generic Packet::ptr handles; the
PPI does not enforce any packet type restrictions.
\section modules Modules

A module is represented by a class type. Each module has several components:

\li It may have any number of connectors (inputs and outputs).
\li Each module declares flow information which details the route packets take within the
module. This information does not define how the information is processed, it only tells
where data arriving on some input will be directed.
\li The module might take additional parameters.
\li The module might also register additional events.
An example module (slightly simplified):

\code
class RateStuffer
    : public senf::ppi::Module
{
public:
    ActiveInput payload;
    ActiveInput stuffing;
    PassiveOutput output;

    RateStuffer(unsigned packetsPerSecond)
    {
        route(payload, output);
        route(stuffing, output);
        registerEvent(&RateStuffer::tick,
                      senf::ppi::IntervalTimer(1000u, packetsPerSecond));
    }

private:
    void tick()
    {
        if (payload)
            output(payload());
        else
            output(stuffing());
    }
};
\endcode
This module declares three I/O connectors (see below): <tt>payload</tt>, <tt>stuffing</tt> and
<tt>output</tt>. These connectors are defined as <em>public</em> data members so they can be
accessed from the outside. This is important, as we will see below.

On module instantiation, it will declare its flow information with <tt>route</tt> (which
is inherited from <tt>senf::ppi::Module</tt>). Then the module registers an interval timer which
will fire <tt>packetsPerSecond</tt> times every <tt>1000</tt> milliseconds.

The processing of the module is very simple: Whenever a timer tick arrives a packet is sent. If
the <tt>payload</tt> input is ready (see throttling below), a payload packet is sent, otherwise
a stuffing packet is sent. The module will therefore provide a constant stream of packets at a
fixed rate on <tt>output</tt>.
An example module to generate the stuffing packets could be

\code
class CopyPacketGenerator
    : public senf::ppi::Module
{
public:
    PassiveOutput output;

    CopyPacketGenerator(Packet::ptr prototype)
        : prototype_ (prototype)
    {
        output.onRequest(&CopyPacketGenerator::makePacket);
    }

private:
    Packet::ptr prototype_;

    void makePacket()
    {
        output(prototype_.clone());
    }
};
\endcode

This module just produces a copy of a given packet whenever output is requested.
\section connectors Connectors

Inputs and outputs can be active or passive. An \e active I/O is <em>activated by the
module</em> to send data or to poll for available packets. A \e passive I/O is <em>signaled by
the framework</em> to fetch data from the module or to pass data into the module.

To send or receive a packet (either actively or after packet reception has been signaled), the
module just calls the connector. This makes it possible to generate or process multiple packets
in one iteration. However, reading will only succeed as long as packets are available from the
connection.

Since a module is free to generate more than a single packet on incoming packet requests, all
input connectors incorporate a packet queue. This queue is exposed to the module and allows the
module to process packets in batches.
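The batch idea can be sketched in plain C++, independent of the actual PPI queue API (all names
here are illustrative, not part of the library):

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Illustrative sketch only -- not the real PPI queue API. A packet is
// modelled as a plain string; the queue lets the module drain several
// packets in one processing run instead of one per event.
typedef std::string Packet;

class InputQueue {
public:
    void push(Packet const & p) { queue_.push_back(p); }
    bool empty() const          { return queue_.empty(); }
    Packet pop() {
        Packet p (queue_.front());
        queue_.pop_front();
        return p;
    }
private:
    std::deque<Packet> queue_;
};

// A handler draining the whole queue in one iteration (batch processing)
std::vector<Packet> processBatch(InputQueue & input) {
    std::vector<Packet> batch;
    while (! input.empty())
        batch.push_back(input.pop());
    return batch;
}
```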
\section connections Connections

To make use of the modules, they have to be instantiated and connections have to be created
between the I/O connectors. It is possible to connect any pair of input/output connectors as
long as one of them is active and the other is passive.

It is possible to connect two active connectors with each other using a special adaptor
module. This module has a passive input and a passive output. It will queue any incoming packets
and automatically handle throttling events (see below). This adaptor is automatically added by
the connect method if needed.

To complete our simplified example: let's say we have an <tt>ActiveSocketInput</tt> and a
<tt>PassiveUdpOutput</tt> module. We can then use our <tt>RateStuffer</tt> module to build an
application which will create a fixed-rate UDP stream:
\code
RateStuffer rateStuffer (10);

senf::Packet::ptr stuffingPacket = senf::Packet::create<...>(...);
CopyPacketGenerator generator (stuffingPacket);

senf::UDPv4ClientSocketHandle inputSocket (1111);
senf::ppi::ActiveSocketInput udpInput (inputSocket);

senf::UDPv4ClientSocketHandle outputSocket ("2.3.4.5:2222");
senf::ppi::PassiveSocketOutput udpOutput (outputSocket);

senf::ppi::connect(udpInput.output, rateStuffer.payload,
                   dynamicModule<PassiveQueue>()
                       -> qdisc(ThresholdQueueing(10,5)) );
senf::ppi::connect(generator.output, rateStuffer.stuffing);
senf::ppi::connect(rateStuffer.output, udpOutput.input);
\endcode
First all necessary modules are created. Then the connections between these modules are set
up. The buffering on the udpInput <-> rateStuffer adaptor is changed so the queue will begin to
throttle only if more than 10 packets are in the queue. The connection will be unthrottled as
soon as there are no more than 5 packets left in the queue. This application will read UDP
packets coming in on port 1111 and will forward them to port 2222 on host 2.3.4.5 at a fixed
rate of 10 packets per second.
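The 10/5 buffering above amounts to simple hysteresis. The following plain C++ sketch
(illustrative names, not the PPI's ThresholdQueueing implementation) shows the intended
behavior -- throttle once more than \c high packets are queued, unthrottle again at \c low or
fewer:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch of ThresholdQueueing(high, low) behavior; the real
// PPI class works on connector queues and emits notifications instead of
// exposing a boolean.
class ThresholdQueue {
public:
    ThresholdQueue(std::size_t high, std::size_t low)
        : high_ (high), low_ (low), size_ (0), throttled_ (false) {}

    void enqueue() {
        ++size_;
        if (size_ > high_)
            throttled_ = true;      // throttle notification would fire here
    }
    void dequeue() {
        if (size_ > 0) --size_;
        if (size_ <= low_)
            throttled_ = false;     // unthrottle notification would fire here
    }
    bool throttled() const { return throttled_; }

private:
    std::size_t high_;
    std::size_t low_;
    std::size_t size_;
    bool throttled_;
};
```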
\section throttling Throttling

If a passive connector cannot handle incoming requests, this connector may be \e
throttled. Throttling a request will forward a throttle notification to the module connected
to that connector. The module then must handle this throttle notification. If automatic
throttling is enabled for the module (which is the default), the notification will automatically
be forwarded to all dependent connectors as taken from the flow information. From there it will
be forwarded to further modules and so on.
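The propagation along the declared flow information can be modelled with a small, self-contained
C++ sketch (illustrative only -- the PPI keeps this information in its route objects, not in a
string multimap):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>

// Illustrative model of automatic throttle forwarding: a notification
// arriving at one connector is propagated to every connector reachable
// through the flow information declared via route().
class FlowInfo {
public:
    void route(std::string const & from, std::string const & to) {
        routes_.insert(std::make_pair(from, to));
    }
    // Collect every connector a notification starting at 'origin' reaches
    void forward(std::string const & origin, std::set<std::string> & reached) const {
        typedef std::multimap<std::string, std::string>::const_iterator Iterator;
        std::pair<Iterator, Iterator> range (routes_.equal_range(origin));
        for (Iterator i (range.first); i != range.second; ++i)
            if (reached.insert(i->second).second)   // avoid revisiting (cycles)
                forward(i->second, reached);
    }
private:
    std::multimap<std::string, std::string> routes_;
};
```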
A throttle notification reaching an I/O module will normally disable the input/output by
disabling any external I/O events registered by the module. When the passive connector which
originated the notification becomes active again, it creates an unthrottle notification which
will be forwarded in the same way. This notification will re-enable any registered I/O events.
The above discussion shows that throttle events are always generated on passive connectors and
received on active connectors. To differentiate further, throttling originating from a
passive input is called <em>backward throttling</em>, since it is forwarded in the direction \e
opposite to the data flow. Backward throttling notifications are sent towards the input
modules. On the other hand, throttling originating from a passive output is called
<em>forward throttling</em>, since it is forwarded along the \e same direction the data
flows. Forward throttling notifications are therefore sent towards the output modules.
Since throttling a passive input may not disable all further packet delivery immediately, every
passive input contains an input queue. In its default configuration, the queue will send out
throttle notifications when it becomes non-empty and unthrottle notifications when it becomes
empty again. This automatic behavior may however be disabled. This allows a module to collect
incoming packets in its input queue before processing a bunch of them in one go.
\section events Events

Modules may register additional events. These external events are very important since they
drive the PPI framework. Possible event sources are

\li time based events
\li file descriptors.
Here is some example code implementing the ActiveSocketInput module:

\code
class ActiveSocketInput
    : public senf::ppi::Module
{
public:
    typedef senf::ClientSocketHandle<
        senf::MakeSocketPolicy< senf::ReadablePolicy,
                                senf::DatagramFramingPolicy > > Socket;

    ActiveOutput output;

    // I hesitate taking parser by const & since a const & can be bound to
    // a temporary even though a const & is all we need. The real implementation
    // will probably make this a template arg. This simplifies the memory management
    // from the user's point of view.
    ActiveSocketInput(Socket socket, DataParser & parser = defaultParser_)
        : socket_ (socket), parser_ (parser),
          event_ (registerEvent( &ActiveSocketInput::data,
                                 senf::ppi::IOSignaler(socket, senf::ppi::IOSignaler::Read) ))
    {
        route(event_, output);
    }

private:
    static PacketParser<senf::DataPacket> defaultParser_;

    Socket socket_;
    DataParser const & parser_;
    senf::ppi::IOSignaler::EventBinding event_;

    void data()
    {
        std::string data;
        socket_.read(data);
        output(parser_(data));
    }
};
\endcode
First we declare our own socket handle type which allows us to read packets. The constructor
then takes two arguments: a compatible socket and a parser object. This parser object gets
passed the packet data as read from the socket (an \c std::string) and returns a
senf::Packet::ptr. The \c PacketParser is a simple parser which interprets the data as specified
by the template argument.

We register an IOSignaler event. This event will be signaled whenever the socket is
readable. This event is routed to the output. This routing automates throttling for the socket:
Whenever the output receives a throttle notification, the event will be temporarily disabled.

Processing arriving packets happens in the \c data() member: This member simply reads a packet
from the socket, passes it to the \c parser_ and sends the generated packet out.
\implementation Generation of throttle notifications: Backward throttling notifications are
automatically generated (if this is not disabled) whenever the input queue is non-empty \e
after the event handler has finished processing. Forward throttling notifications are not
generated automatically within the connector. However, the passive-passive adaptor will
generate forward throttling notifications whenever the input queue is empty.
\todo
\li We need to clearly differentiate between auto-throttling and auto-throttle-forwarding,
that is between a connector's own throttling state and the forwarded state.
\li Exception handling.
\li ActiveInputs also need a queue: This is necessary to allow a PassiveOutput to create more
than a single packet from a single 'onRequest' event. This greatly simplifies writing
modules which produce multiple output packets for a single input packet.
\li We need to clear up the throttled() member semantics: If the connector is throttled, does
it still report so while there are packets left in the queue? Probably yes. However, it does
not forward throttling notifications until instructed by the qdisc. Throttling notifications
are also bound to onThrottle/onUnThrottle callbacks. The semantics are then clear: an active
connector emitting onThrottle cannot process any further request (for inputs, no data will
be available; for outputs, the data will be queued in the peer input).
*/


// Local Variables:
// c-file-style: "senf"
// indent-tabs-mode: nil
// ispell-local-dictionary: "american"
// End: