
News

Posted almost 9 years ago
Introduction

Previously I have written about connectivity options for IoT devices; today I assume that a cellular technology (known under names like GSM, 3G, UMTS, LTE, 4G) has been chosen. Unless you are a big vendor you will end up using a module (instead of a chipset), and either you are curious what the module is doing behind its AT command interface or you are trying to understand a real problem. The following is going to help you, or at least be entertaining.

The xgoldmon project was a first project to bring air-interface traces and logging to the general public, but it was limited to Infineon basebands (and some Gemalto devices), needed special commands to enable and didn't include all messages all the time.

In the last months I have worked intensively with modules of a vendor called Quectel. They use Qualcomm chipsets and have built the GSM/UMTS Quectel UC20 and the GSM/UMTS/LTE Quectel EC20 modules. They are available as a variant to solder, but to speed up development they are offered as miniPCI express cards as well. I ended up putting them into a PCengines APU2, soldered an additional SIM card holder for the second SIM card, placed U.FL to SMA connectors and put everything into one of their standard cases. While the UC20 and EC20 are pretty similar, the software is not the same and some basic features are missing from the EC20, e.g. the SIM Toolkit support. The easiest way to acquire these modules seems to be Amazon, but if you are based in the EU I can point you to a distributor.

The extremely nice feature is that both modules export Qualcomm's bi-directional DIAG debug interface over USB (without having to activate it through an undocumented AT command). It is a framed protocol with a simple checksum at the end of each frame, and many general frame types (e.g. logging and how regions are described) are known and used in projects like ModemManager to extract additional information. Some parts, including things like Tx power, are not well understood yet. I have made a very simple utility available on github that enables logging, converts the radio messages to the Osmocom GSMTAP protocol and sends them to a remote host using UDP or writes them to a pcap file. The result can be analyzed using wireshark.

Set-up

You will need a new enough Linux kernel (e.g. >= Linux 4.4) for the modems to be recognized and initialized properly. This will create four ttyUSB serial devices, a /dev/cdc-wdmX and a wwanX interface. The latter two can be used to get data as a normal network interface instead of launching pppd. In short, these modules are super convenient for adding connectivity to a product.

PCengines APU2 with Quectel EC20 and Quectel UC20

Building

The repository includes a shell script to build some dependencies and the main utility. You will need to install autoconf, automake, libtool, pkg-config, libtalloc, make and gcc on your Linux distribution.

git clone git://github.com/moiji-mobile/diag-parser
cd diag-parser
./build/build_local.sh

Running

Assuming that your modem exposes the DIAG debug interface on /dev/ttyUSB0 and you have wireshark running on a system with the internal IPv4 address 10.23.42.7, you can run the following command (a small sketch of receiving the resulting GSMTAP frames follows at the end of this post):

./diag-parser -g 10.23.42.7 -i /dev/ttyUSB0

Exploring

Analyzing UMTS with wireshark: the capture below was taken with the Quectel module. It allows you to see the radio messages used to register to the network, when sending an SMS and when placing calls.

Wireshark dissecting UMTS
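To check that GSMTAP frames actually arrive at the machine running wireshark, a tiny UDP listener is enough. The sketch below is not part of diag-parser; it binds to the standard GSMTAP port (4729) and prints the first header fields of every frame. The field offsets follow the published GSMTAP header layout (version, header length in 32-bit words, payload type), but verify them against the Osmocom gsmtap.h before relying on them.

```c
/* Hedged sketch: receive GSMTAP-over-UDP frames (as emitted by diag-parser -g)
 * and print a few header fields. Not part of the diag-parser repository. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#define GSMTAP_UDP_PORT 4729	/* IANA-registered port used by GSMTAP */

int main(void)
{
	uint8_t buf[4096];
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(GSMTAP_UDP_PORT);
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	for (;;) {
		ssize_t len = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
		if (len < 4)
			continue;	/* too short to carry a GSMTAP header */
		/* byte 0: version, byte 1: header length in 32-bit words, byte 2: payload type */
		printf("GSMTAP v%u, hdr %u words, type %u, %zd bytes\n",
		       buf[0], buf[1], buf[2], len);
	}
	return 0;
}
```

Running this on the receiving host while diag-parser is active should print one line per decoded radio message; wireshark can then be used for the actual dissection.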
Posted almost 9 years ago
Many of us deal or will deal with (connected) M2M/IoT devices. This might mean writing firmware for microcontrollers, using an RTOS like NuttX or a full-blown Unix(-like) operating system like FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or something in between. Many of these devices will have sensors to collect data like GNSS position/time, temperature, light, acceleration, seeing airplanes, detecting lightning, etc.

The backend problem is work but mostly "solved". One can rely on something like Amazon IoT or create a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data can get from the device to the backend.

Image from Wikipedia (CC BY-SA 3.0 by Herzi Pinki)

To make this thought experiment a bit more real, let's imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycle to work and bikes being stolen remains a big tragedy. So the primary focus of such an IoT device would be to prevent theft (make other bikes an easier target), to make selling a stolen bicycle more difficult (e.g. by making it easy to check whether something has been stolen) and, in case it has been stolen, to make it easier to find the current location.

Architecture

Let's assume two different architectures. One possibility is to have the bicycle actively acquire its position and then try to push this information to a server ("active push"). Another approach is to have fixed scanning stations installed or have users scan/report bicycles ("passive pull"). Both lead to very different designs.

Active Push

The system would need some sort of GNSS module, a microcontroller or some full-blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have good antennas to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, have a way to bridge the air gap from the bicycle to the server.

Push architecture

Passive Pull

The device would not know its position or whether it is being moved. It might be a simple barcode/QR code/NFC tag/iBeacon/etc. In the case of a barcode it could carry the serial number of the frame and some owner/registration information. In the case of NFC it should be a randomized serial number (if possible, to increase privacy). Users would scan the barcode/QR code and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinate) and upload it to the server (a small sketch of such a report follows at the end of this post). For NFC the smartphone might be able to scan the tag, and one can try to put readers at busy locations. The incentive for the app user is to feel good collecting points for scanning bicycles, maybe with some reward if a stolen bicycle is found. Buyers could easily check whether a bicycle has been reported as stolen (not considering the difficulty of establishing ownership).

Pull architecture

Technology requirements

The technologies that come to my mind are barcode, QR code, playing some noise not audible to humans and decoding it in an app, NFC, ZigBee, 6LoWPAN, Bluetooth, Bluetooth Smart, GSM, UMTS, LTE and NB-IoT. Next I will look at the main differentiators/constraints of these technologies, provide a small explanation and finish with how these constraints interact with each other.

World-wide usability

Radio technology operates on a specific set of radio frequencies (bands). Each country may manage these frequencies separately, which can lead to having to use the same technology on different bands depending on the current country. This increases the complexity of the antenna design (or requires multiple antennas), makes the mechanical design more complex, and makes software testing, production testing, etc. more difficult. Or there might be multiple users/technologies on the same band (e.g. wifi + bluetooth, or simply too many wifis).

Power consumption

Each radio technology needs to transmit and might need to listen or permanently monitor the air for incoming messages ("paging"). With NFC the scanner might be able to power the device, but for other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.

Range

Different technologies were designed for sender and receiver being at different minimum/maximum distances (and speeds, but that is not relevant for the lock, nor is bandwidth for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is "breathing").

Pick two of three

Ideally we want something that works over long distances, requires no battery to send/receive, and still pushes position/acceleration/event reports out to servers. Sadly this is not how reality works and we will have to set priorities. The more bands to support, the more complicated the antenna design, production, calibration and testing. One technology might not work in all countries, might not be equally popular everywhere, or the market situation might differ, e.g. some cities have city-wide public hotspots, some don't. Higher transmission power increases the range but increases the power consumption even more. More current is drawn during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries. Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.

Technologies

| Technology | Bands | Global coverage | Range | Battery needed | Scan device needed | Cost of device | Arch. | Comment |
|---|---|---|---|---|---|---|---|---|
| Barcode/QR code | Optical | Yes | Centimeters | No | App scanning barcode required | extremely low | Pull | Sticker needs to be hard to remove and visible, maybe embedded into the frame |
| Play audio | Audio not audible to humans | Yes | Centimeters | Yes | App recording audio | moderate | Pull | Button to play audio? |
| NFC | 13.56 MHz | Yes | Centimeters | No | Yes | extremely low | Pull | Privacy issues |
| RFID | Many | Yes, but not on a single band | Centimeters to meters | Yes | Receiver required | low | Pull | Many bands, specific readers needed |
| Bluetooth LE | 2.4 GHz | Yes | Meters | Yes | Yes, but common | low | Pull/Push | Competes with wifi for spectrum |
| ZigBee | Multiple | Yes, but not on a single band | Meters | Yes | Yes | mid | Push | Not commonly deployed, software more involved |
| 6LoWPAN | Like ZigBee | Like ZigBee | Meters | Yes | Yes | low | Push | Uses the ZigBee physical layer and then IPv6; requires 6LoWPAN-to-Internet translation |
| GSM | 850/900, 1800/1900 | Almost everywhere besides South Korea, Japan, some islands | Kilometers | Yes | No | moderate | Push | Almost global coverage, direct communication with the backend possible |
| UMTS | Many | Less than GSM but includes South Korea, Japan | Meters to kilometers depending on usage | Yes | No | high | Push | Higher power usage than GSM, higher device cost |
| LTE | Many | Less than GSM | Designed for kilometers | Yes | No | high | Push | Expensive, higher power consumption |
| NB-IoT (LTE) | Many | Not deployed | Kilometers | Yes | No | high | Push | Not deployed, coming in the future; one can embed GSM equally well into an LTE carrier |

Conclusion

Both a push and a pull architecture seem feasible and create different challenges and possibilities. A pull architecture will require at least smartphone app support and maybe a custom receiver device. It will only work in regions with lots of users, and making privacy abuse/tracking more difficult is something to solve. For a push architecture, using GSM is a good approach. If coverage in South Korea or Japan is required, a mix of GSM/UMTS might be an option. NB-IoT seems nice but right now it is not deployed and it is not clear whether a module will require less power than a GSM module. NB-IoT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side, but that is for other posts.
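For the passive pull variant, the payload the app uploads after a scan is small and simple. The sketch below is purely illustrative: the field names, the example frame id and the JSON shape are invented here and not taken from any existing service.

```c
/* Hedged sketch of a "passive pull" scan report: a phone scans a tag and
 * uploads the id plus its own position. All names and the JSON layout are
 * hypothetical and only illustrate how little data such a report needs. */
#include <stdio.h>
#include <time.h>

struct scan_report {
	const char *frame_id;	/* serial or randomized id read from the tag */
	double lat, lon;	/* WGS 84 position of the scanning phone */
	long scanned_at;	/* unix timestamp of the scan */
};

int main(void)
{
	struct scan_report r = { "FRAME-1234567", 52.37000, 4.89500, 0 };
	r.scanned_at = (long)time(NULL);

	/* serialize as a small JSON document, ready to be uploaded to the backend */
	printf("{\"frame_id\":\"%s\",\"lat\":%.5f,\"lon\":%.5f,\"ts\":%ld}\n",
	       r.frame_id, r.lat, r.lon, r.scanned_at);
	return 0;
}
```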
Posted almost 9 years ago
As part of running infrastructure it might make sense, or be required, to store logs of transactions. A good way can be to capture the raw, unmodified network traffic. For our GSM backend this is what we (have to) do, and I wrote a client that uses libpcap to capture data and send it to a central server for storing the trace. The system is rather simple and in production at various customers. The benefits of having a central server are access to a lot of storage without granting too many systems and users access, central log rotation and compression, an easy way to grab all relevant traces, and many more.

Recently the topic of doing real-time processing of captured data came up. I wanted to add some kind of side channel that distributes data to interested clients before writing it to disk. E.g. one might analyze an RTP audio flow for packet loss and jitter without actually storing the personal conversation.

I didn't create a custom protocol but decided to try ØMQ (ZeroMQ). It has many built-in strategies (publish/subscribe, round-robin routing, pipeline, request/reply, proxying, ...) for connecting distributed systems. The framework abstracts DNS resolution, connect and re-connect, and makes it very easy to build the standard message exchange patterns. I opted for the publish/subscribe pattern because the collector server (acting as publisher) does not care whether anyone is consuming the events or data. The messages I send are quite simple as well. There are two kinds of multi-part messages, one for events and one for data. A subscriber can easily filter for events or data and for a specific capture source (a small subscriber sketch follows at the end of this post). The support for ZeroMQ was added in two commits: the first one adds basic ZeroMQ context/socket support and configuration, and the second adds sending out the events and data in a fire-and-forget manner. In a simple test set-up it seems to work just fine.

Since moving to Amsterdam I try to attend more meetups. Recently I went to talk at the local Elasticsearch group and found out about packetbeat. It is a program written in Go that uses a PCAP library to capture network traffic, has protocol decoders written in Go to do IP re-assembly and decoding, and uploads the extracted information to an instance of Elasticsearch. In principle it sits somewhere between my PCAP system and a distributed wireshark (without the same number of protocol decoders). In our network we wouldn't want the edge systems to talk directly to the Elasticsearch system, and I wouldn't want to run decoders as root (or at least not with extended capabilities). As an exercise to learn a bit more about the Go language I tried to modify packetbeat to consume trace data from my new data interface. The result can be found here, and I do understand (though I am still hooked on Smalltalk/Pharo) why a lot of people like Go. The built-in fetching of dependencies from github is very neat, and the module and interface/implementation approach is easy to comprehend and powerful.

The result of my work allows something like the picture below: first we centralize traffic capturing at the pcap collector and then have packetbeat pick up the data, decode it and forward it for analysis into Elasticsearch. Let's see if upstream merges my changes.
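To give an idea of how small a consumer of this side channel can be, here is a hedged sketch of a ZeroMQ subscriber. The endpoint and the idea of filtering by a topic prefix are assumptions for illustration; the actual address and the exact multi-part layout are defined by the collector's configuration and code.

```c
/* Hedged sketch: subscribe to the pcap collector's PUB socket and print the
 * size of every message part. Endpoint and prefixes are placeholders. */
#include <stdio.h>
#include <zmq.h>

int main(void)
{
	void *ctx = zmq_ctx_new();
	void *sub = zmq_socket(ctx, ZMQ_SUB);
	char buf[8192];

	/* hypothetical endpoint of the collector's publisher socket */
	zmq_connect(sub, "tcp://pcap-collector.example.net:6666");
	/* an empty prefix subscribes to everything; a prefix like "event" or "data" would filter */
	zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);

	for (;;) {
		int more = 1;
		size_t more_size = sizeof(more);

		/* the collector sends multi-part messages; read every part of one message */
		while (more) {
			int len = zmq_recv(sub, buf, sizeof(buf), 0);
			if (len < 0)
				break;
			if (zmq_getsockopt(sub, ZMQ_RCVMORE, &more, &more_size) < 0)
				break;
			printf("part: %d bytes (%s)\n", len,
			       more ? "more follows" : "last part");
		}
	}

	zmq_close(sub);
	zmq_ctx_destroy(ctx);
	return 0;
}
```

Linked against libzmq (e.g. cc sub.c -lzmq), something this small is enough to watch events flow by without touching the pcap files on disk.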
Posted almost 9 years ago
This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest.

GNU autoconf ships with a not very well known piece of software called GNU autotest, and this post will focus on it. GNU autotest is a very simple framework/test runner. One defines a testsuite, and this testsuite will launch test applications and record the exit code, stdout and stderr of each test application. It can diff the output with the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept about the execution of each test. The tool integrates nicely with automake's make check and make distcheck: the testsuite is executed and a test failure fails the build.

The way we use it is quite simple as well. We create a small application inside the tests/testname directory and most of the time just capture its output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application aborts and prints a backtrace. This means that on failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED. (A minimal sketch of such a test binary follows at the end of this post.) The following goes through the details of enabling autotest in a project.

Enabling GNU autotest

The configure.ac file needs to get a line like this: AC_CONFIG_TESTDIR(tests). It needs to be put after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal.

Integrating with automake

The next step is to define a testsuite inside tests/Makefile.am. This is boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process:

# The `:;' works around a Bash 3.2 bug when the output is not writeable.
$(srcdir)/package.m4: $(top_srcdir)/configure.ac
	:;{ \
	  echo '# Signature of the current package.' && \
	  echo 'm4_define([AT_PACKAGE_NAME],' && \
	  echo '  [$(PACKAGE_NAME)])' && \
	  echo 'm4_define([AT_PACKAGE_TARNAME],' && \
	  echo '  [$(PACKAGE_TARNAME)])' && \
	  echo 'm4_define([AT_PACKAGE_VERSION],' && \
	  echo '  [$(PACKAGE_VERSION)])' && \
	  echo 'm4_define([AT_PACKAGE_STRING],' && \
	  echo '  [$(PACKAGE_STRING)])' && \
	  echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
	  echo '  [$(PACKAGE_BUGREPORT)])'; \
	  echo 'm4_define([AT_PACKAGE_URL],' && \
	  echo '  [$(PACKAGE_URL)])'; \
	} >'$(srcdir)/package.m4'

EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
TESTSUITE = $(srcdir)/testsuite
DISTCLEANFILES = atconfig

check-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' $(TESTSUITEFLAGS)

installcheck-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)' $(TESTSUITEFLAGS)

clean-local:
	test ! -f '$(TESTSUITE)' || $(SHELL) '$(TESTSUITE)' --clean

AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
AUTOTEST = $(AUTOM4TE) --language=autotest
$(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
	$(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
	mv $@.tmp $@

Defining a testsuite

The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:

AT_INIT
AT_BANNER([Regression tests.])

AT_SETUP([gsm0408])
AT_KEYWORDS([gsm0408])
cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout
AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], [], [expout], [ignore])
AT_CLEANUP

This initializes the testsuite and creates a banner. The lines between AT_SETUP and AT_CLEANUP represent one testcase. In there we copy the expected output from the source directory into a file called expout, and inside the AT_CHECK directive we specify what to execute and what to do with the output.

Executing a testsuite and dealing with failure

The testsuite is automatically executed as part of make check and make distcheck. It can also be run manually by entering the tests directory and executing the following:

$ make testsuite
make: `testsuite' is up to date.
$ ./testsuite
## ---------------------------------- ##
## openbsc 0.13.0.60-1249 test suite. ##
## ---------------------------------- ##
Regression tests.
  1: gsm0408       ok
  2: db            ok
  3: channel       ok
  4: mgcp          ok
  5: gprs          ok
  6: bsc-nat       ok
  7: bsc-nat-trie  ok
  8: si            ok
  9: abis          ok
## ------------- ##
## Test results. ##
## ------------- ##
All 9 tests were successful.

In case of a failure the following information will be printed and can be inspected to understand why things went wrong:

...
  2: db            FAILED (testsuite.at:13)
...
## ------------- ##
## Test results. ##
## ------------- ##
ERROR: All 9 tests were run, 1 failed unexpectedly.
## -------------------------- ##
## testsuite.log was created. ##
## -------------------------- ##
Please send `tests/testsuite.log' and all information you think might help:
  To:
  Subject: [openbsc 0.13.0.60-1249] testsuite: 2 failed
You may investigate any problem if you feel able to do so, in which case the test suite provides a good starting point. Its output may be found below `tests/testsuite.dir'.

You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there is one directory that contains a log file about the run and the output of the application. We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.
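To make the structure of one such test concrete, here is a hedged sketch of what a minimal standalone test binary can look like. It is not taken from OpenBSC; it only shows the pattern of asserting invariants and printing a stable transcript that is compared against a checked-in .ok file (the expout above). The function under test and the .ok file name are placeholders.

```c
/* Hedged sketch of a minimal autotest-style test binary. In OpenBSC the
 * assertions would typically use OSMO_ASSERT instead of assert(). */
#include <assert.h>
#include <stdio.h>

/* placeholder for the unit under test */
static int encode(int value)
{
	return value * 2;
}

int main(void)
{
	/* a failed assertion aborts: exit code and stdout no longer match,
	 * so GNU autotest marks the testcase as FAILED */
	assert(encode(2) == 4);

	/* everything printed here has to match the checked-in
	 * tests/example/example_test.ok that is copied into expout */
	printf("encode(2) = %d\n", encode(2));
	printf("encode(21) = %d\n", encode(21));
	return 0;
}
```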
Posted almost 9 years ago
Last year Jacob and I worked on the osmo-sgsn of OpenBSC. We improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial-grade Smalltalk MAP stack and use it in a real-world production GSM network. While working on and manually testing this stack we did not use our osmo-pcu software but another, proprietary IP-based BTS; after all, we didn't want to debug PCU issues right then.

This year Jacob has taken over as maintainer of the osmo-pcu. He started with a fix for a frequent crash (which was introduced because we understood the specification on TBF re-use better, but not the code), has spent hours and hours reading the specification, studied the log output, fixed defect after defect and then moved on to features. We tried the software at this year's Camp and fixed another round of reliability issues. Some weeks ago I noticed that the proprietary IP-based BTS had been moved from the desk into the shelf.

In contrast to the proprietary BTS, issues here have a real chance of being resolved. It might take a long time, it might take paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!
Posted almost 9 years ago
The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:

- The vendor might force you to run on hardware sold by that vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or it might even require you to install an entire rack. Either way you are tied to a single supplier.
- It might come with a yearly license (or support fee) and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
- The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi-IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
- There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge a premium because the vendor can.

The alternative is to build a component from scratch, and the initial progress will be great as the technology is more manageable than many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then will the fun start. How big is your test suite? Do you have tests for ITU Q.787? How will you do load balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their head (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1, TCAP).

But there is a third way and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:

- You can run the component on any Linux/FreeBSD system: on physical hardware, on virtualized hardware, together with other services or not. You decide.
- The software will always be yours. Once you have a running system, there is nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
- Independence from a single supplier. You can build a local team to maintain the software, or you can find another supplier to maintain it.
- Built for change. Having access to the source code enables you to modify it, and with a Free Software license you are allowed to run your modified versions as well.

The only danger is to make sure not to fall into the OpenCore trap that surrounds many OpenSource projects. Make sure that everything you need is available in source form and that the license allows you to run modified copies.
Posted almost 9 years ago
Some Free Software projects have already moved to Github, some probably plan to, and the Python project will move soon. I have not followed the reasons why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime, offer checkouts through ssh, git, http (good for corporate firewalls) and a subversion interface, they have integrated wiki and ticket management, the fork feature allows an upstream to discover what is being done to the software, and the pull requests and the integration with third-party providers are great. The last item enables many nice things, especially integration with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).

Not everything is great though. As a Free Software project one might decide that using proprietary javascript to develop and interact with a Free Software project is not acceptable, one might want to control the repository oneself, and then people look for alternatives. At the Osmocom project we are using cgit, mailing lists, patchwork, trac, host our own jenkins and then mirror some of our repositories to github.com for easy access. Another option is to find a platform like Github that is Free, and a lot of people look or point to gitlab.com.

From a freedom point of view I think Gitlab is a lot worse than Github. They try to create the illusion that this is a Free Software alternative to Github.com. They offer to host your project, but if you want the same features for self-hosting you will notice that you fell for their marketing. Their website prominently states "Runs GitLab Enterprise Edition". If you look at the feature comparison between the "Community Edition" (the Free Software project) and their open-core additions (the Enterprise Edition), you will notice that many of the extra features are essential. So when deciding between putting your project on github.com or gitlab.com, the question is not between proprietary and Free Software but essentially between proprietary and proprietary, and as such there is no difference.