Open Source For You - September 2014
Contents

Understanding the Document Object Model (DOM) in Mozilla
Introducing AngularJS
An Introduction to Device Drivers in the Linux Kernel

Admin
Use Pound on RHEL to Balance the Load on Web Servers
Why We Need to Handle Bounced Emails
Boost the Performance of CloudStack with Varnish
Use Wireshark to Detect ARP Spoofing
Make Your Own PBX with Asterisk

Open Gurus
How to Make Your USB Boot with Multiple ISOs
Contiki OS: Connecting Microcontrollers to the Internet of Things

Regular Features
You Said It...
Offers of the Month
New Products
FOSSBytes
Editorial Calendar
Tips & Tricks
FOSS Jobs
FOSSBYTES Powered by www.efytimes.com
Users have had some trouble using the popular Shutter screenshot tool for Linux, owing to the many irritating bugs and stability issues that came along with it. But they are in for a pleasant surprise, as developers have now released a new bug fix for the tool that aims to address some of its more prominent issues. The new bug-fix release, Shutter 0.92, is now available for download for the Linux platform, and a number of stability issues have been dealt with for good.

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
The event aims to assist the community in the datacentre domain by exchanging ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

HostingCon India; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world.
Website: http://www.hostingcon.com/contact-us/

According to Sousa, the Shopping Lens implementation contravened a 1995 EU Directive on the protection of users' personal data. Sousa had provided a number of instances to put forward his point. Initially, Sousa began by reaching out to Canonical for clarification, but to no avail. He was finally forced to file a complaint with the Information Commissioner's Office (ICO) regarding his security concerns. Finally, the ICO responded to Sousa's need for clarification by clearly stating that the Shopping Lens feature complies with the DPA (Data Protection Act) and in no way breaches users' privacy.

Open source community irked by broken Linux kernel patches
One of the many fine threads that bind the open source community is avid participation and cooperation between developers across the globe, with the common goal of improving the Linux kernel. However, not everyone is actually trying to help out there, as recent happenings suggest. Trolls exist even in the Linux community, and one that has managed to make a big impression is Nick Krause. Krause's recent antics have led to significant bouts of frustration among Linux kernel maintainers. Krause continuously tries to get broken patches past the maintainers, though his goals are not very clear at the moment. Many developers believe that Krause aims to damage the Linux kernel. While that might be a distant dream for him (at least for now), he has managed to irk quite a lot of people, slowing down the whole development process because of the need to keep fixing broken patches introduced by him.

Oracle launches Solaris 11.2 with OpenStack support
Oracle Corp recently launched the latest version of its Solaris enterprise UNIX platform: Solaris 11.2. Notably, this new version had been in beta since April. The latest release comes with several key enhancements: support for OpenStack as well as software-defined networking (SDN). Additionally, there are various security, performance and compliance enhancements introduced in Oracle's new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its most crucial enhancement. The latest version runs the most recent version of the popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion of software-defined networking (SDN) support is seen as Oracle's ongoing effort to transform its Exalogic Elastic Cloud into one-stop data centres. Until now, Exalogic boxes were being increasingly used in the form of massive servers or for transaction processing. They were therefore not fulfilling their real purpose, which is to work
Web browser and OpenOffice, the two joys of the open source world. The local government has argued that a large amount of money is spent on buying licences in the case of proprietary software, wasting a lot of the local taxpayers' money. Therefore, a decision to drop Microsoft in favour of cost-effective open source alternatives seems to be a viable option.

LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation's popular open source office suite is widely used by millions of people across the globe. Therefore, news that the suite could soon be launched on Android is something to watch out for. You heard that right! A new report by Tech Republic suggests that the Document Foundation is currently working rigorously to make this happen. However, as things stand, there is still some time before that happens for real. Even as the Document Foundation came out with the first Release Candidate (RC) version of the upcoming LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version on a timely basis), work is on to make LibreOffice available for Google's much-loved Android platform as well, the report says. The buzz is that developers are currently talking about (and working at) getting the file size right, that is, something well below the Google limit. Until they are able to do that, LibreOffice for Android is a distant dream, sadly.
However, as and when this happens, LibreOffice would be in direct competition with Google Docs. Since there is a genuine need for Open Document Format (ODF) support in Android, the release might just be what the doctor ordered for many users. This is more of a rumour at the moment, and things will get clearer in time. There is no official word from either Google or the Document Foundation about this, but we will keep you posted on developments. The recent release, LibreOffice 4.2.5 RC1, meanwhile tries to curb many key bugs that plagued the last 4.2.4 final release. This, in turn, has improved its usability and stability to a significant extent.

Khronos releases OpenGL NG
The Khronos Group recently announced the release of the latest iteration of OpenGL (the oldest high-level 3D graphics API still in popular use). Although OpenGL 4.5 is a noteworthy release in its own right, it is the Group's second major release, the next generation OpenGL initiative, that is garnering widespread appreciation. While OpenGL 4.5 is what some might call a fairly standard annual OpenGL update, OpenGL NG is a complete rebuild of the OpenGL API, designed with the idea of building an entirely new version of OpenGL. This new version will have a significantly reduced overhead owing to the removal of a lot of abstraction. Also, it will do away with the major inefficiencies of older versions when working at a low level with the bare-metal GPU hardware.
Being a very high-level API, earlier versions of OpenGL made it hard to efficiently run code on the GPU directly. While this didn't matter so much earlier, now things have changed. Fuelled by more mature GPUs, developers today tend to ask for graphics APIs that allow them to get much closer to the bare metal. The next generation OpenGL initiative is directed at developers who are looking to improve performance and reduce overhead.
RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for a bit longer) don't feel left out, Red Hat has launched a beta release of its Red Hat Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the recently released RHEL 7, the move is directed towards RHEL 6.x users so that they benefit from new platform features. At the same time, it comes with some really cool features that are quite independent of RHEL 7 and which make the 6.6 beta stand out on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility for RHEL for a period of ten years, so technically speaking, it cannot drastically change major elements of an in-production release. Quite simply put, it can't and won't change an in-production release in a way that could alter stability or existing compatibility. This would eventually mean that the new release on offer cannot go much against the tide with respect to RHEL 6. Although the feature list for the RHEL 6.6 beta ties in closely with the feature list of the major release (6.0), it doesn't mean the RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to introduce some key improvements for RHEL 6.x users. To begin with, the RHEL 6.6 beta includes some features that were first introduced with RHEL 7, the most notable being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x users more integrated Remote Direct Memory Access (RDMA) capabilities.

Dropbox's updated Android app offers improved features
A major update has been announced by Dropbox in connection with its official Android app, and is available on Google Play. This new update carries version number 2.4.3 and comes with a lot of improved features. As the Google Play listing suggests, this new Dropbox version supports in-app previews of Word, PowerPoint and PDF files. A better search experience is also offered in this new version, which enables tracking of recent queries; suggestions are also displayed. One can also search in specific folders from now on.
Motherboards
The Lifeline of Your Desktop
If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is what you require to link all the important and key components together. Let's find out how to select the best desktop motherboards.

The central processing unit (CPU) can be considered to be the brain of a system or, in layman's language, of a PC, but it still needs a nervous system to connect it with all the other components in your PC. A motherboard plays this role, as all the components are attached to it and to each other with the help of this board. It can be defined as a PCB (printed circuit board) that has the capability of expanding. As the name suggests, a motherboard is believed to be the mother of all the components attached to it, including network cards, sound cards, hard drives, TV tuner cards, slots, etc. It holds the most significant sub-systems: the processor along with other important components. Motherboards are found in many electronic devices, like TVs, washing machines and other embedded systems. Since it provides the electrical connections through which the other components are connected and linked with each other, it needs the most attention. Unlike a backplane, it hosts other devices and subsystems and also contains the central processing unit.
There are quite a lot of companies that deal in motherboards, and Simmtronics is among the leading players. According to Dr Inderjeet Sabbrawal, chairman, Simmtronics: "Simmtronics has been one of the exclusive manufacturers of motherboards in the hardware industry over the last 20 years. We strongly believe in creativity, innovation and R&D. Currently, we are fulfilling our commitment to provide the latest mainstream motherboards. At Simmtronics, the quality of the motherboards is strictly controlled. At present, the market is not growing. India still has a varied market for older generation models as well as the latest models of motherboards."

Factors to consider while buying a motherboard
In a desktop, several essential units and components are attached directly to the motherboard, such as the microprocessor, main memory, etc. Other components, such as external storage, controllers for sound and video display, and various peripheral devices, are attached to it through slots, plug-in cards or cables. There are a number of factors to keep in mind while buying a motherboard, and these depend on your specific requirements. Linux is slowly taking over the PC world and, hence, people now look for Linux-supported motherboards; as a result, almost every motherboard now supports Linux. The main factors to keep in mind when buying a Linux-supported motherboard are discussed below.

CPU socket
The central processing unit is the key component of a motherboard, and the board's performance is primarily determined by the kind of processor it is designed to hold. The CPU socket can be defined as an electrical component that attaches to the motherboard and is designed to house a microprocessor. So, when you're buying a motherboard, you should look for a CPU socket that is compatible with the CPU you have planned to use. Most of the time, motherboards use one of the following five sockets: LGA1155, LGA2011, AM3, AM3+ and FM1. Some of the sockets are backward compatible and some of the chips are interchangeable. Once you opt for a motherboard, you will be limited to using the processors that match its socket's specifications.

Form factor
A motherboard's capabilities are broadly determined by its shape, size and how much it can be expanded; these aspects are known as form factors. Although there is no fixed design or form for motherboards, and they are available in many variations, two form factors have always been the favourites: ATX and microATX. The ATX motherboard measures around 30.5cm x 23cm (12 inch x 9 inch) and offers the highest number of expansion slots, RAM bays and data connectors. MicroATX motherboards measure 24.38cm x 24.38cm (9.6 inch x 9.6 inch) and have fewer expansion slots, RAM bays and other components. The form factor of a motherboard can be decided according to what purpose the motherboard is expected to serve.

RAM bays
Random access memory (RAM) is considered the most important workspace on a motherboard; it is where data is held for processing after being read from the hard disk drive or solid state drive. The efficiency of your PC directly depends on the speed and size of your RAM. The more space you have in RAM, the more efficient your computing will be. But it is no use having RAM faster than your motherboard can support, as that will just be a waste of the extra potential. Neither should you have RAM slower than the motherboard expects, as then the PC will not work well due to the bottlenecks caused by mismatched capabilities. Choosing a motherboard which supports just the right RAM is vital.
Apart from these factors, there are many others to consider before selecting a motherboard. These include the audio system, display, LAN support, expansion capabilities and peripheral interfaces.
Asus Z87-K motherboard
Supported CPU: Fourth generation Intel Core i7 and Core i5 processors, and other Intel processors
Memory supported: Dual-channel memory architecture; supports Intel XMP
Form factor: ATX

Gigabyte Technology GA-Z87X-OC motherboard
Supported CPU: Fourth generation Intel Core i7 and Core i5 processors, and other Intel processors
Memory supported: Supports DDR3 3000
Form factor: MicroATX
For the past few months, we have been discussing information retrieval and natural language processing, as well as the algorithms associated with them. This month, we continue our discussion on natural language processing (NLP) and look at how NLP can be applied in the field of software engineering. Given one or many text documents, NLP techniques can be applied to extract information from them. The software engineering (SE) lifecycle gives rise to a number of textual documents, to which NLP can be applied.
So what are the software artifacts that arise in SE? During the requirements phase, a requirements document is an important textual artifact. This specifies the expected behaviour of the software product being designed, in terms of its functionality, user interface, performance, etc. It is important that the requirements being specified are clear and unambiguous, since during product delivery, customers would like to confirm that the delivered product meets all their specified requirements.
Having vague, ambiguous requirements can hamper requirement verification. So text analysis techniques can be applied to the requirements document to determine whether there are any ambiguous or vague statements. For instance, consider a statement like, "Servicing of user requests should be fast, and request waiting time should be low." This statement is ambiguous, since it is not clear what exactly the customer's expectations of fast service or low waiting time may be. NLP tools can detect such ambiguous requirements. It is also important that there are no logical inconsistencies in the requirements. For instance, a requirement that "Login names should allow a maximum of 16 characters", and another that "The login database will have a field for login names which is 8 characters wide", conflict with each other. While the user interface allows up to a maximum of 16 characters, the backend login database will support fewer characters, which is inconsistent with the earlier requirement. Though currently such inconsistent requirements are flagged by human inspection, it is possible to design text analysis tools to detect them.
The software design phase also produces a number of SE artifacts, such as the design document, design models in the form of UML documents, etc, which can also be mined for information. Design documents can be analysed to generate automatic test cases in order to test the final product. During the development and maintenance phases, a number of textual artifacts are generated. Source code itself can be considered a textual document. Apart from source code, source code control system logs such as SVN/GIT logs, Bugzilla defect reports, developers' mailing lists, field reports, crash reports, etc, are the various SE artifacts to which text mining can be applied.
Various types of text analysis techniques can be applied to SE artifacts. One popular method is duplicate or similar document detection. This technique can be applied to find duplicate bug reports in bug tracking systems. A variation of this technique can be applied to detect code clones and copy-and-paste snippets.
Automatic summarisation is another popular technique in NLP. These techniques try to generate a summary of a given document by looking for the key points contained in it. There are two approaches to automatic summarisation. One is known as extractive summarisation, using which key phrases and sentences in the given document are extracted and put back together to provide a summary of the document. The other is the abstractive summarisation technique, which is used to build an internal semantic representation of the given document, from which key concepts are extracted and a summary generated using natural language understanding. The abstractive summarisation technique is close to how humans would summarise a given document. Typically, we would proceed by building a knowledge representation of the document in our minds and then using our own words to provide a summary of the key concepts. Abstractive summarisation is obviously more complex than extractive summarisation, but yields better summaries.
Coming to SE artifacts, automatic summarisation techniques can be applied to summarise large bug reports. They can also be applied to generate high-level comments
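As a toy illustration of the extractive approach described above, the sketch below scores each sentence by the average corpus frequency of its content words and returns the top-scoring sentences in document order. The scoring scheme and the tiny stop-word list are my own simplifications for the example, not taken from the article.

```python
import re
from collections import Counter

# A deliberately tiny stop-word list for the illustration.
STOP = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it"}

def extractive_summary(text, n=1):
    """Return the n highest-scoring sentences, preserving document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Corpus word frequencies over non-stop words.
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(sentence):
        toks = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOP]
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]
```

On a three-sentence text where two sentences share the frequent words "Hadoop" and "data", the sentence packing the most frequent words per token is selected first; real extractive summarisers add position features, TF-IDF weighting and redundancy removal on top of this idea.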
Fedora 20 makes it easy to install Hadoop. Version 2.2 is packaged and available in the standard repositories. It will place the configuration files in /etc/hadoop, with reasonable defaults so that you can get started easily. As you may expect, managing the various Hadoop services is integrated with systemd.

Setting up a single node
First, start an instance, with the name h-mstr, in OpenStack using a Fedora Cloud image (http://fedoraproject.org/get-fedora#clouds). You may get an IP like 192.168.32.2. You will need to choose at least the m1.small flavour, i.e., 2GB RAM and 20GB disk. Add an entry in /etc/hosts for convenience. Now, modify the configuration files located in /etc/hadoop. Edit core-site.xml and modify the value of fs.default.name by replacing localhost by h-mstr.

Launch instances h-slv1 and h-slv2 serially, using Hadoop-Base as the instance boot source. Launching the first instance from a snapshot is pretty slow. In case the IP addresses are not the same as your guess in /etc/hosts, edit /etc/hosts on each of the three nodes to the correct values. For your convenience, you may want to make entries for h-slv1 and h-slv2 in the desktop's /etc/hosts file as well. The slaves file in /etc/hadoop should name the two slave nodes:

[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2

The following commands should be run from Fedora on h-mstr. Reformat the namenode to make sure that the single-node tests are not causing any unexpected issues:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop namenode -format"

Start the hadoop services on h-mstr:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
  hadoop-nodemanager hadoop-resourcemanager

Start the datanode and yarn services on the slave nodes:

$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode hadoop-nodemanager

Create the hdfs directories and a directory for user fedora, as on a single node:

$ sudo hdfs-create-dirs
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

You can list the hdfs directories created as follows. The command may look complex, but you are running the hadoop fs command in a shell as Hadoop's internal user, hdfs:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2014-07-15 13:21 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-07-15 14:18 /user
drwxr-xr-x - hdfs supergroup 0 2014-07-15 13:22 /var

You can run the same tests again. Although you are using three nodes, the improvement in performance compared to the single node is not expected to be noticeable, as the nodes are running on a single desktop. The pi example took about one minute on the three nodes, compared to the 90 seconds taken earlier. Terasort took 7 minutes instead of 8.

Note: I used an AMD Phenom II X4 965 with 16GB RAM to arrive at the timings. All virtual machines and their data were on a single physical disk.

Both OpenStack and MapReduce are collections of interrelated services working together. Diagnosing problems, especially in the beginning, is tough, as each service has its own log files. It takes a while to get used to knowing where to look. However, once these are working, it is incredible how easy they make distributed processing!

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com and http://sethanil.blogspot.com, and reach him via email at [email protected]
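The single-node section above says to edit core-site.xml so that fs.default.name points at h-mstr instead of localhost. For reference, that edit typically looks like the fragment below; the property name and hostname are from the article, but the port number is an assumption based on common Hadoop defaults, so check the template your distribution ships.

```xml
<!-- /etc/hadoop/core-site.xml: point the default filesystem at h-mstr
     instead of localhost. The 8020 port is an assumed common default. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://h-mstr:8020</value>
  </property>
</configuration>
```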
Editorial Calendar
July 2014: Firewall and Network Security | Web Hosting Solutions Providers | MFD Printers for SMEs
August 2014: Kernel Development | Big Data Solution Providers | SSDs for Servers
November 2014: Cloud Special | Virtualisation Solutions Providers | Network Switches and Routers
January 2015: Programming Languages | IT Consultancy Service Providers | Laser Printers for SMEs
February 2015: Top 10 of Everything on Open Source | Storage Solutions Providers | Wireless Routers
Have you ever wondered which module is slowing down your Python program, and how to optimise it? Well, there are profilers that can come to your rescue.
Profiling, in simple terms, is the analysis of a program to measure the memory used by a certain module, the frequency and duration of function calls, and the time complexity of the same. Such profiling tools are termed profilers. This article will discuss the line_profiler for Python.

Installation
Installing pre-requisites: Before installing line_profiler, make sure you install these pre-requisites.
a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip python3-pip cython cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-pip

Note: 1. I have used the -y argument to automatically install the packages without being prompted by the yum installer.
2. Mac users can use Homebrew to install these packages.

Cython is a pre-requisite because the source releases require a C compiler. If the Cython package is not found, or is too old in your current Linux distribution version, install it by running the following command in a terminal:

sudo pip install Cython

Note: Mac OS X users can install Cython using pip.

Installing line_profiler: Clone the source repository with Mercurial:

hg clone https://bitbucket.org/robertkern/line_profiler

Running line_profiler: Once the slow module is profiled, the next step is to run the line_profiler, which will give a line-by-line computation of the code within the profiled function.
Open a terminal, navigate to the folder where the .py file is located and type the following command:

kernprof.py -l example.py; python3 -m line_profiler example.py.lprof

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s
File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
   211                                @profile
   212                                def _connect_view(self):
   213     4   205     51.2    32.7       vadjustment = self.view.get_vadjustment()
   214     4    98     24.5    15.6       self._adjustmentValueId = vadjustment.connect(
   215     4    79     19.8    12.6           'value-changed',

% Time: the percentage relative to the total amount of recorded time spent in the function.
Line Contents: displays the actual source code.

Note: If you make changes in the source code, you need to run kernprof and line_profiler again in order to profile the updated code and get the latest results.
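To round off the workflow above, here is a minimal, hypothetical example.py of the kind kernprof expects. kernprof injects the @profile decorator at runtime; the fallback definition below is my own addition so the file also runs as ordinary Python.

```python
# example.py -- a toy target for kernprof/line_profiler.
# kernprof injects `profile` into builtins when run via `kernprof.py -l`;
# the try/except fallback keeps the script runnable on its own.
try:
    profile
except NameError:
    def profile(func):
        return func

@profile
def slow_sum(n):
    """A deliberately line-heavy loop, so per-line timings are visible."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(slow_sum(1000))
```

Run it as `kernprof.py -l example.py` and then `python3 -m line_profiler example.py.lprof` to get a per-line table like the one shown above.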
This article is an introduction to the DOM programming interface and the DOM inspector,
which is a tool that can be used to inspect and edit the live DOM of any Web document or
XUL application.
The Document Object Model (DOM) is a programming interface for HTML and XML documents. It provides a structured representation of a document, and it defines a way in which that structure can be accessed from programs so that they can change the document's structure, style and content. The DOM provides a representation of the document as a structured group of nodes and objects that have properties and methods. Essentially, it connects Web pages to scripts or programming languages.
A Web page is a document that can be displayed either in the browser window or as the HTML source; in both cases it is the same document. The DOM provides another way to represent, store and manipulate that same document. In simple terms, we can say that the DOM is a fully object-oriented representation of a Web page, which can be modified with any scripting language. The W3C DOM standard forms the basis of the DOM implementation in most modern browsers. Many browsers offer extensions beyond the W3C standard.
All the properties, methods and events available for manipulating and creating Web pages are organised into objects. For example, the document object represents the document itself, the tableObject implements the special HTMLTableElement DOM interface for accessing HTML tables, and so forth.

Why is DOM important?
Dynamic HTML (DHTML) is a term used by some vendors to describe the combination of HTML, style sheets and scripts that allows documents to be animated. The W3C DOM working group is aiming to make sure interoperable and language-neutral solutions are agreed upon.
As Mozilla claims the title of 'Web Application Platform', support for the DOM is one of the most requested features; in fact, it is a necessity if Mozilla wants to be a viable alternative to the other browsers. The user interface of Mozilla (also Firefox and Thunderbird) is built using XUL, and uses the DOM to manipulate its own user interface.

How do I access the DOM?
You don't have to do anything special to begin using the DOM.
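Because the DOM is defined as a language-neutral interface, the same node-and-object model can be explored outside the browser as well. The sketch below uses Python's standard xml.dom.minidom purely as an illustration; the markup snippet is invented for the example.

```python
# Illustration: the W3C DOM's document/element/node interfaces, exercised
# through Python's standard xml.dom.minidom. The markup is a made-up example.
from xml.dom.minidom import parseString

doc = parseString("<html><body><p id='greet'>Hello DOM</p></body></html>")

# Navigate the tree exactly as the DOM defines it: documentElement,
# getElementsByTagName, attributes and text child nodes.
para = doc.getElementsByTagName("p")[0]
print(doc.documentElement.tagName)   # the root element's tag name
print(para.getAttribute("id"))       # an attribute on the <p> element
print(para.firstChild.nodeValue)     # the text node inside <p>
```

In a browser the entry point is simply the global document object, but the interfaces being called (getElementsByTagName, getAttribute, firstChild) are the same ones the W3C DOM specifies.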
There are three ways of inspecting any document, and these are described below.
Inspecting content documents: The Inspect Content Document menu popup can be accessed from the File menu, and it will list the currently loaded content documents. In the Firefox and SeaMonkey browsers, these will be the Web pages you have opened in tabs. For Thunderbird and SeaMonkey Mail and News, any messages you're viewing will be listed here.
Inspecting Chrome documents: The Inspect Chrome Document menu popup can be accessed from the File menu, and it will contain the list of currently loaded Chrome windows and sub-documents. A browser window and the DOM inspector are likely to already be open and displayed in this list. The DOM inspector keeps track of all the windows that are open, so to inspect the DOM of a particular window in the DOM inspector, simply access that window as you normally would and then choose its title from this dynamically updated menu list.
Inspecting arbitrary URLs: We can also inspect the DOM of arbitrary URLs by using the Inspect a URL menu item in the File menu, or by just entering a URL into the DOM inspector's address bar and clicking Inspect or pressing Enter. We should not use this approach to inspect Chrome documents; instead, ensure that the Chrome document loads normally, and use the Inspect Chrome Document menu popup to inspect it.
When you inspect a Web page by this method, a browser pane at the bottom of the DOM inspector window will open up, displaying the Web page. This allows you to use the DOM inspector without having to use a separate browser window, or without embedding a browser in your application at all. If you find that the browser pane takes up too much space, you may close it, but you will not be able to visually observe any of the consequences of your actions.

DOM inspector viewers
You can use the DOM nodes viewer in the document pane of the DOM inspector to find and inspect the nodes you are interested in. One of the biggest and most immediate advantages that this brings to your Web and application development is that it makes it possible to find the mark-up and the nodes in which the interesting parts of a page or a piece of the user interface are defined. One common use of the DOM inspector is to find the name and location of a particular icon being used in the user interface, which is not an easy task otherwise. If you're inspecting a Chrome document, then as you select nodes in the DOM nodes viewer, the rendered versions of those nodes are highlighted in the user interface itself. Note that there are bugs that prevent the flasher in the DOM inspector APIs from working currently on certain platforms.
If you inspect the main browser window, for example, and select nodes in the DOM nodes viewer, you will see the various parts of the browser interface being highlighted with a blinking red border. You can traverse the structure and go from the topmost parts of the DOM tree to lower level nodes.
Selecting elements by clicking: A powerful interactive feature of the DOM inspector is that when you have it open and have enabled this functionality by choosing Edit > Select Element by Click (or by clicking the little magnifying glass icon in the upper left portion of the DOM inspector application), you can click anywhere in a loaded Web page or the inspected Chrome document. The element you click on will be shown in the document pane in the DOM nodes viewer, and its information will be displayed in the object pane.
Searching for nodes in the DOM: Another way to inspect the DOM is to search for particular elements you're interested in
such as the search-go-button icon that lets users perform a by ID, class or attribute. When you select Edit > Find Nodes...
query using the selected search engine. or press Ctrl + F, the DOM inspector displays a Find dialogue
The list of viewers available from the viewer menu gives that lets you find elements in various ways, and that gives you
you some idea about how extensive the DOM inspectors incremental searching by way of the <F3> shortcut key.
capabilities are. The following descriptions provide an Updating the DOM, dynamically: Another feature
overview of these viewers capabilities: worth mentioning is the ability the DOM inspector gives
1. The DOM nodes viewer shows attributes of nodes that can you to dynamically update information reflected in the DOM
take them, or the text content of text nodes, comments and about Web pages, the user interface and other elements. Note
processing instructions. The attributes and text contents that when the DOM inspector displays information about a
may also be edited. particular node or sub-tree, it presents individual nodes and
2. The Box Model viewer gives various metrics about XUL their values in an active list. You can perform actions on the
and HTML elements, including placement and size. individual items in this list from the Context menu and the
3. The XBL Bindings viewer lists the XBL bindings attached Edit menu, both of which contain menu items that allow you
to elements. If a binding extends to another binding, the to edit the values of those attributes.
binding menu list will list them in descending order to This interactivity allows you to shrink and grow the
root binding. element size, change icons, and do other layout-tweaking
4. The CSS Rules viewer shows the CSS rules that updatesall without actually changing the DOM as it is
are applied to the node. Alternatively, when used in defined in the file on disk.
conjunction with the Style Sheets viewer, the CSS Rules
viewer lists all recognised rules from that style sheet. References
Properties may also be edited. Rules applying to pseudo- [1] https://developer.mozilla.org/en-US/docs/Web/API/
elements do not appear. Document_Object_Model
5. This viewer gives a hierarchical tree of the object panes [2] https://developer.mozilla.org/en/docs/Web/API/Document
subject. The JavaScript Object viewer also allows
JavaScript to be evaluated by selecting the appropriate By: Anup Allamsetty
menu item in the context menu. The author is an active contributor to Mozilla and GNOME. He
Three basic actions of DOM node viewers are blogs at https://anup07.wordpress.com/ and you can email him
at [email protected].
described below.
Experimenting with
More Functions in Haskell
We continue our exploration of the open source, advanced and purely functional
programming language, Haskell. In the third article in the series, we will focus on more
Haskell functions, conditional constructs and their usage.
A function in Haskell has the function name followed by arguments. An infix operator function has operands on either side of it. A simple infix add operation is shown below:

*Main> 3 + 5
8

If you wish to convert an infix function to a prefix function, it must be enclosed within parentheses:

*Main> (+) 3 5
8

Similarly, if you wish to convert a prefix function into an infix function, you must enclose the function name within backquotes (`). The elem function takes an element and a list, and returns True if the element is a member of the list:

*Main> 3 `elem` [1, 2, 3]
True
*Main> 4 `elem` [1, 2, 3]
False

A function can also be defined as a partial application (a section) of an infix operator. Consider the following definition in a file:

diffTen :: Integer -> Integer
diffTen = (10 -)

Loading the file in GHCi and passing three as an argument yields:

*Main> diffTen 3
7

Haskell exhibits polymorphism. A type variable in a function is said to be polymorphic if it can take any type. Consider the last function that returns the last element in a list. Its type signature is:

*Main> :t last
last :: [a] -> a

The a in the above snippet refers to a type variable and can represent any type. Thus, the last function can operate on a list of integers or characters (a string):

*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'

You can use a where clause for local definitions inside a function, as shown in the following example, to compute the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
  where pi = 3.1415

Loading it in GHCi and computing the area for radius 1 gives:

*Main> areaOfCircle 1
3.1415

You can also use the let expression with the in statement to compute the area of a circle:

areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415 in pi * radius * radius

Executing the above with input radius 1 gives:

*Main> areaOfCircle 1
3.1415

Similarly, the if and else constructs must be neatly aligned. The else statement is mandatory in Haskell. For example:

sign :: Integer -> String
sign x =
  if x > 0
    then "Positive"
    else
      if x < 0
        then "Negative"
        else "Zero"

Running the example with GHCi, you get:

*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"

The case construct can be used for pattern matching against possible expression values. It needs to be combined with the of keyword. The different values need to be aligned, and the resulting action must be specified after the -> symbol for each case. For example:

sign :: Integer -> String
sign x =
  case compare x 0 of
    LT -> "Negative"
    GT -> "Positive"
    EQ -> "Zero"

Executing the above gives:

*Main> sign 0
"Zero"
*Main> sign 3
"Positive"
*Main> sign (-3)
"Negative"

There are three very important higher order functions in Haskell: map, filter and fold.

The map function takes a function and a list, and applies the function to each and every element of the list. Its type signature is:

*Main> :t map
map :: (a -> b) -> [a] -> [b]

The first function argument accepts an element of type a and returns an element of type b. An example of adding two to every element in a list can be implemented using map:

*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]

The filter function accepts a predicate function for evaluation, and a list, and returns the list with those elements that satisfy the predicate. For example:

*Main> filter (> 0) [-2, -1, 0, 1, 2]
[1,2]

Its type signature is:

filter :: (a -> Bool) -> [a] -> [a]

The predicate function for filter takes as its first argument an element of type a and returns True or False.

The fold function performs a cumulative operation on a list. It takes as arguments a function, an accumulator (starting with an initial value) and a list. It cumulatively aggregates the computation of the function on the accumulator value as well as each member of the list. There are two types of folds: left and right.

*Main> foldl (+) 0 [1, 2, 3, 4, 5]
15
*Main> foldr (+) 0 [1, 2, 3, 4, 5]
15

Their type signatures are, respectively:

foldl :: (b -> a -> b) -> b -> [a] -> b
foldr :: (a -> b -> b) -> b -> [a] -> b

The way the fold is evaluated among the two types is different, and is demonstrated below:

*Main> foldl (+) 0 [1, 2, 3]
6
*Main> foldl (+) 1 [2, 3]
6
*Main> foldl (+) 3 [3]
6

It can be represented as f (f (f a b1) b2) b3, where f is the function, a is the accumulator value, and b1, b2 and b3 are the elements of the list. The parentheses accumulate on the left for a left fold. The computation looks like this:

*Main> (+) 0 1
1
*Main> (+) ((+) 0 1) 2
3
*Main> (+) ((+) ((+) 0 1) 2) 3
6

With the recursion, the expression is constructed and evaluated only when it is finally formed. It can thus cause a stack overflow or never complete when working with infinite lists. The foldr evaluation looks like this:

*Main> foldr (+) 0 [1, 2, 3]
6
*Main> foldr (+) 0 [1, 2] + 3
6
*Main> foldr (+) 0 [1] + 2 + 3
6

It can be represented as f b1 (f b2 (f b3 a)), where f is the function, a is the accumulator value, and b1, b2 and b3 are the elements of the list. The computation looks like this:

*Main> (+) 3 0
3
*Main> (+) 2 ((+) 3 0)
5
*Main> (+) 1 ((+) 2 ((+) 3 0))
6

To be continued on page.... 44
Introducing AngularJS

AngularJS is an open source Web application framework maintained by Google and the community, which helps to build Single Page Applications (SPA). Let's get to know it better.

AngularJS can be introduced as a front-end framework capable of incorporating the dynamicity of JavaScript with HTML. The self-proclaimed 'super heroic' JavaScript MVW (Model View Whatever) framework is maintained by Google and many other developers at GitHub. This open source framework works its magic on Web applications of the Single Page Application (SPA) category. The logic behind an SPA is that an initial page is loaded at the start of an application from the server. When an action is performed, the application fetches the required resources from the server and adds them to the initial page. The key point here is that an SPA makes just one server round trip, providing you with the initial page. This makes your applications very responsive.

Why AngularJS?
AngularJS brings out the beauty in Web development. It is extremely simple to understand and code. If you're familiar with HTML and JavaScript, you can write the Hello World program in minutes. With the help of Angular, the combined power of HTML and JavaScript can be put to maximum use. One of the prominent features of Angular is that it is extremely easy to test, and that makes it very suitable for creating large-scale applications. Also, the Angular community, comprising Google's developers primarily, is very active in the development process. Google Trends gives assuring proof of Angular's future in the field of Web development (Figure 1).

Figure 1: Report from Google Trends

Core features
Before getting into the basics of AngularJS, you need to understand two key terms: templates and models. The HTML page that is rendered out to you is pretty much the template. So basically, your template has HTML, Angular entities (directives, filters, model variables, etc) and CSS (if necessary). The example code given below for data binding is a template.

In an SPA, the data and the presentation of data are separated by a model layer that handles data and a view layer that reads from models. This helps an SPA in redrawing any part of the UI without requiring a server round trip to retrieve HTML. When the data is updated, its view is notified and the altered data is produced in the view.

<html ng-app >
<head>
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js">
</script>
</head>
<body ng-init="yourtext = 'Data binding is cool!'">
Enter your text: <input type="text" ng-model="yourtext" />
<strong>You entered :</strong> {{yourtext}}
</body>
</html>

Here, the model yourtext is bound (two-way) to the view. The double curly braces will notify Angular to put the value of yourtext in its place.

Given below is the purpose of some common directives.

ngApp: This directive bootstraps your Angular application and considers the HTML element in which the attribute is specified to be the root element of Angular. In the above example, the entire HTML page becomes an Angular application, since the ng-app attribute is given to the <html> tag. If it was given to the <body> tag, the body alone becomes the root element. Or you could create your own Angular module and let that be the root of your application. An AngularJS module might consist of controllers, services, directives, etc. To create a new module, use the following command:

var moduleName = angular.module( 'moduleName', [ ] );
// The array is a list of modules our module depends on

Also, remember to initialise your ng-app attribute to moduleName. For instance, <html ng-app="moduleName">.

ngClick: This directive functions similarly to the onclick event of JavaScript.

<button ng-click="mul = mul * 2" ng-init="mul = 1"> Multiply with 2 </button>
After multiplying : {{mul}}

Whenever the button is clicked, mul gets multiplied by two.

ngRepeat: This directive instantiates its element once per item in a collection; combined with a filter, the list below shows only the people whose names match the text entered:

<body ng-init=" people=[{name:'Tony', branch:'CSE'},
{name:'Santhosh', branch:'EEE'},
{name:'Manisha', branch:'ECE'}];">
Name: <input type="text" ng-model="name"/>
<li ng-repeat="person in people | filter: name"> {{person.name}} - {{person.branch}}
</li>
</body>

Advanced features
Controllers: To bring some more action to our app, we need controllers. These are JavaScript functions that add behaviour to our app. Let's make use of the ngController directive to bind the controller to the DOM:

<body ng-controller="ContactController">
<input type="text" ng-model="name"/>
<button ng-click="disp()">Alert !</button>
<script type="text/javascript">
function ContactController($scope) {
  $scope.disp = function( ){
    alert("Hey " + $scope.name);
  };
}
</script>
</body>

One term to be explained here is $scope. To quote from the developer guide: "Scope is an object that refers to the application model." With the help of scope, the model variables can be initialised and accessed. In the above example, when the button is clicked, disp( ) comes into play, i.e., the scope is assigned a behaviour. Inside disp( ), the model variable name is accessed using scope.

Views and routes: In any usual application, we navigate to different pages. In an SPA, instead of pages, we have views. So, you can use views to load different parts of your application. Switching to different views is done through routing. For routing, we make use of the ngRoute module and the ngView directive:

var miniApp = angular.module( 'miniApp', ['ngRoute'] );

miniApp.config(function( $routeProvider ){
  $routeProvider.when( '/home', { templateUrl: 'partials/home.html' } );
  $routeProvider.when( '/animal', { templateUrl: 'partials/animals.html' } );
  $routeProvider.otherwise( { redirectTo: '/home' } );
});

ngRoute enables routing in applications and $routeProvider is used to configure the routes. home.html and animals.html are examples of partials; these are files that will be loaded to your view, depending on the URL passed. For example, you could have an app that has icons, and whenever an icon is clicked, a link is passed. Depending on the link, the corresponding partial is loaded into the view. This is how you pass the links:

<a href='#/home'><img src='partials/home.jpg' /></a>
<a href='#/animal'><img src='partials/animals.jpg' /></a>

Don't forget to add the ng-view attribute to the HTML component of your choice. That component will act as a placeholder for your views.

<div ng-view=""></div>

Services: According to the official documentation of AngularJS, Angular services are substitutable objects that are wired together using dependency injection (DI). You can use services to organise and share code across your app. With DI, every component will receive a reference to the service. Angular provides useful services like $http, $window and $location. In order to use these services in controllers, you can add them as dependencies, as in:

var testapp = angular.module( 'testapp', [ ] );
testapp.controller ( 'testcont', function( $window ) {
  //body of controller
});

To define a custom service, write the following:

testapp.factory ('serviceName', function( ) {
  var obj;
  return obj; // returned object will be injected to the component
              // that has called the service
});

Testing
Testing is done to correct your code on-the-go and avoid ending up with a pile of errors on completing your app's development. Testing can get complicated when your app grows in size and APIs start to get tangled up, but Angular has got its own defined testing schemes. Usually, two kinds of testing are employed: unit and end-to-end (E2E) testing. Unit testing is used to test individual API components, while in E2E testing, the working of a set of components is tested.

The usual components of unit testing are describe( ), beforeEach( ) and it( ). You have to load the Angular module before testing, and beforeEach( ) does this. Also, this function
There are some statements, like condition checking, where f b1 can be computed even without requiring the subsequent arguments, and hence the foldr function can work with infinite lists. There is also a strict version of foldl (foldl') that forces the computation before proceeding with the recursion.

If you want a reference to a matched pattern, you can use the as pattern syntax. The tail function accepts an input list and returns everything except the head of the list. You can write a tailString function that accepts a string as input and returns the string with the first character removed:

tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs

The entire matched pattern is represented by input in the above code snippet.

Functions can be chained to create other functions. This is called composing functions. The mathematical definition is as under:

(f o g)(x) = f(g(x))

The composition operator (.) has the highest precedence and is right-associative. For example:

*Main> (reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"])))
"Haskell Brooks Curry"

You can rewrite the above using the function application operator ($), which is right-associative:

Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"

You can also use the dot notation to make it even more readable, but the final argument needs to be evaluated first; hence, you need to use the function application operator for it:

*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"
Use Bugzilla
to Manage Defects in Software

In the quest for excellence in software products, developers have to go through the process of defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to install, configure and use it to file a bug report and act on it.

In any project, defect management and various types of testing play key roles in ensuring quality. Defects need to be logged, tracked and closed to ensure the project meets quality expectations. Generating defect trends also helps project managers to take informed decisions and make the appropriate course corrections while the project is being executed. Bugzilla is one of the most popular open source defect management tools and helps project managers to track the complete lifecycle of a defect.

Figure 2: Bugzilla main page

Figure 3: Defect lifecycle

Installation and configuration of Bugzilla
Step 1: Prerequisites
Bugzilla needs Perl, a Web server and a database engine; ensure that all of them are on your Linux system before proceeding with the installation. This specific installation covers MySQL as the backend database.

Step 2: User and database creation
Before proceeding with the installation, the user and database need to be created by following the steps mentioned below. The names used here for the database or the users are specific to this installation, and can change between installations.

Start the service by issuing the following command:

# service mysqld start

mysql> CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY 'password';
> GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
> FLUSH PRIVILEGES;
mysql> CREATE DATABASE bugzilla_db CHARACTER SET utf8;
> GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,
REFERENCES ON bugzilla_db.* TO 'bugzilla'@'localhost'
IDENTIFIED BY 'cspasswd';
> FLUSH PRIVILEGES;
> QUIT
An Introduction to
Device Drivers in the Linux Kernel
In the article An Introduction to the Linux Kernel in the August 2014 issue of OSFY, we wrote and
compiled a kernel module. In the second article in this series, we move on to device drivers.
Have you ever wondered how a computer plays audio or shows video? The answer is: by using device drivers. A few years ago we would always install audio or video drivers after installing MS Windows XP. Only then were we able to listen to the audio. Let us explore device drivers in this column.

A device driver (often referred to as a 'driver') is a piece of software that controls a particular type of device which is connected to the computer system. It provides a software interface to the hardware device, and enables access to the operating system and other applications. There are various types of drivers present in GNU/Linux, such as character, block, network and USB drivers. In this column, we will explore only character drivers.

Character drivers are the most common drivers. They provide unbuffered, direct access to hardware devices. One can think of a character device as a long sequence of bytes -- the same as regular files, but accessible only in sequential order. Character drivers support at least the open(), close(), read() and write() operations. The text console, i.e., /dev/console, the serial consoles /dev/ttyS*, and audio/video drivers fall under this category.

To make a device usable, there must be a driver present for it. So let us understand how an application accesses data from a device with the help of a driver. We will discuss the following four major entities.

User-space application: This can be any simple utility like echo, or any complex application.

Device file: This is a special file that provides an interface for the driver. It is present in the file system as an ordinary file. The application can perform all supported operations on it, just like for an ordinary file. It can move, copy, delete, rename, read and write these device files.

Device driver: This is the software interface for the device and resides in the kernel space.

Device: This can be the actual device present at the hardware level, or a pseudo device.

Let us take an example where a user-space application sends data to a character device. Instead of using an actual device we are going to use a pseudo device. As the name suggests, this device is not a physical device. In GNU/Linux, /dev/null is the most commonly used pseudo device. This device accepts any kind of data (i.e., input) and simply discards it. And it doesn't produce any output.

Let us send some data to the /dev/null pseudo device:

[mickey]$ echo -n 'a' > /dev/null

In the above example, echo is a user-space application and null is a special file present in the /dev directory. There is a null driver present in the kernel to control the pseudo device.

To send or receive data to and from the device or application, use the corresponding device file that is connected to the driver through the Virtual File System (VFS) layer. Whenever an application wants to perform any operation on the actual device, it performs this on the device file. The VFS layer redirects those operations to the appropriate functions that are implemented inside the driver. This means that whenever an application performs the open() operation on a device file, in reality the open() function from the driver is invoked, and the same concept applies to the other functions. The implementation of these operations is device-specific.

Major and minor numbers
We have seen that the echo command directly sends data to the device file. Hence, it is clear that to send or receive data to and from the device, the application uses special device files. But how does communication between the device file and the driver take place? It happens via a pair of numbers referred to as the major and minor numbers.
The command below lists the major and minor numbers associated with some character device files:

[bash]$ ls -l /dev/full /dev/null /dev/random /dev/zero
crw-rw-rw- 1 root root 1, 7 Jul 11 20:47 /dev/full
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
crw-rw-rw- 1 root root 1, 8 Jul 11 20:47 /dev/random
crw-rw-rw- 1 root root 1, 5 Jul 11 20:47 /dev/zero

The kernel uses the dev_t type to store major and minor numbers. The dev_t type is defined in the <linux/types.h> header file. Given below is the (abridged) representation of the dev_t type from the header file:

#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H
...
typedef __kernel_dev_t dev_t;
...

To register a character driver, use the register_chrdev() function:

int register_chrdev(unsigned int major, const char *name,
                    const struct file_operations *fops);

This function registers a major number for character devices. The arguments of this function are self-explanatory. The major argument implies the major number of interest, name is the name of the driver and appears in the /proc/devices area and, finally, fops is the pointer to the file_operations structure.

Certain major numbers are reserved for special drivers; hence, one should exclude those and use dynamically allocated major numbers. To allocate a major number dynamically, provide the value zero to the first argument, i.e., major == 0. The function will then dynamically allocate and return a major number.

To deallocate an allocated major number, use the unregister_chrdev() function. The prototype is given below and the parameters of the function are self-explanatory:

void unregister_chrdev(unsigned int major, const char *name);

Given below is the complete working code of the null driver:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "null_driver";

static int null_open(struct inode *i, struct file *f)
{
        printk(KERN_INFO "Calling: %s\n", __func__);
        return 0;
}

static int null_release(struct inode *i, struct file *f)
{
        printk(KERN_INFO "Calling: %s\n", __func__);
        return 0;
}

static ssize_t null_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
        printk(KERN_INFO "Calling: %s\n", __func__);
        return 0;
}

static ssize_t null_write(struct file *f, const char __user *buf,
                          size_t len, loff_t *off)
{
        printk(KERN_INFO "Calling: %s\n", __func__);
        return len;
}

static struct file_operations null_ops =
{
        .owner   = THIS_MODULE,
        .open    = null_open,
        .release = null_release,
        .read    = null_read,
        .write   = null_write
};

static int __init null_init(void)
{
        major = register_chrdev(0, name, &null_ops);
        if (major < 0) {
                printk(KERN_INFO "Failed to register driver.");
                return -1;
        }

        printk(KERN_INFO "Device registered successfully.\n");
        return 0;
}

static void __exit null_exit(void)
{
        unregister_chrdev(major, name);
        printk(KERN_INFO "Device unregistered successfully.\n");
}

module_init(null_init);
module_exit(null_exit);

MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Null driver");

Our driver code is ready. Let us compile and insert the module. In the article last month, we learnt how to write a Makefile for kernel modules.

[mickey]$ make
[root]# insmod ./null_driver.ko

We are now going to create a device file for our driver. But for this we need a major number, and we know that our driver's register_chrdev() function will allocate the major number dynamically. Let us find out this dynamically allocated major number from /proc/devices, which lists the currently registered character and block drivers:

[root]# grep "null_driver" /proc/devices
248 null_driver

From the above output, we are going to use 248 as the major number for our driver. We are only interested in the major number, and the minor number can be anything within a valid range. I'll use 0 as the minor number. To create the character device file, use the mknod utility. Please note that to create the device file you must have superuser privileges:

[root]# mknod /dev/null_driver c 248 0

Now it's time for the action. Let us send some data to the pseudo device using the echo command and check the output of the dmesg command:

[root]# echo "Hello" > /dev/null_driver

[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release

Yes! We got the expected output. When open, write and close
operations are performed on a device file, the appropriate is performed on a device file, the driver should transfer len bytes
functions from our driver's code get called. Let us perform the of data to the device and update the file offset off accordingly.
read operation and check the output of the dmesg command: Our null driver accepts input of any length; hence, return value is
always len, i.e., all bytes are written successfully.
[root]# cat /dev/null_driver
[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release

To make things simple, I have used printk() statements in every function. If we remove these statements, then /dev/null_driver will behave exactly the same as the /dev/null pseudo device. Our code is working as expected. Let us understand the details of our character driver.
First, take a look at the driver's functions. Given below are the prototypes of a few functions from the file_operations structure:

int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len, loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf, size_t len, loff_t *off);

The prototypes of the open() and release() functions are exactly the same. These functions accept two parameters: the first is the pointer to the inode structure. All file-related information, such as the size, owner, access permissions and creation timestamps of the file, the number of hard links, etc, is represented by the inode structure. Each open file is represented internally by the file structure. The open() function is responsible for opening the device and allocating the required resources. The release() function does exactly the reverse job: it closes the device and deallocates the resources.
As the name suggests, the read() function reads data from the device and sends it to the application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer. The third parameter is the size, which implies the number of bytes to be transferred to the user-space buffer. And, finally, the fourth parameter is the file offset, which updates the current file position. Whenever the read() operation is performed on a device file, the driver should copy len bytes of data from the device to the user-space buffer buf and update the file offset off accordingly. This function returns the number of bytes read successfully. Our null driver doesn't read anything; that is why the return value is always zero, i.e., EOF.
The driver's write() function accepts data from the user-space application. The first parameter of this function is the pointer to the file structure. The second parameter is the user-space buffer, which holds the data received from the application. The third parameter is len, which is the size of the data. The fourth parameter is the file offset. Whenever the write() operation is performed on a device file, the driver should copy len bytes of data from the user-space buffer buf to the device and update the file offset off accordingly.
In the next step, we have initialised the file_operations structure with the appropriate driver functions. In the initialisation function we have done the registration-related job, and we deregister the character device in the cleanup function.

Implementation of the full pseudo driver
Let us implement one more pseudo device, namely, full. Any write operation on this device fails and gives the ENOSPC error. This can be used to test how a program handles disk-full errors. Given below is the complete working code of the full driver:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "full_driver";

static int full_open(struct inode *i, struct file *f)
{
	return 0;
}

static int full_release(struct inode *i, struct file *f)
{
	return 0;
}

static ssize_t full_read(struct file *f, char __user *buf, size_t len, loff_t *off)
{
	return 0;
}

static ssize_t full_write(struct file *f, const char __user *buf, size_t len, loff_t *off)
{
	return -ENOSPC;
}

static struct file_operations full_ops =
{
	.owner = THIS_MODULE,
	.open = full_open,
	.release = full_release,
	.read = full_read,
	.write = full_write
};

To be continued on page 55
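The standard /dev/full device behaves just like this driver, so the disk-full handling described above can be exercised from user space. Here is a minimal sketch (Python is used purely for illustration; the driver itself is C):

```python
# Writes to /dev/full always fail with ENOSPC -- the same error the
# "full" pseudo driver returns from its write() handler.
import errno
import os

fd = os.open("/dev/full", os.O_WRONLY)
try:
    os.write(fd, b"test")
    print("unexpected: write succeeded")
except OSError as e:
    # The kernel returned -ENOSPC; user space sees it via errno.
    print("write failed as expected:", errno.errorcode[e.errno])
finally:
    os.close(fd)
```

Running this prints "write failed as expected: ENOSPC", which is exactly how a program under test would observe a disk-full condition.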
Nowadays, every organisation wishes to have an online presence for maximum visibility as well as reach. Industries across different sectors have their own websites with detailed portfolios, so that marketing as well as broadcasting can be integrated very effectively. Web 2.0 applications are quite popular in the global market. With Web 2.0, the applications developed are fully dynamic, so that the website can provide customised results or output to the client. Traditionally, long-term core coding, using different programming or scripting languages like CGI, Perl, Python, Java, PHP, ASP and many others, has been in vogue. But today, excellent applications can be developed within very little time. The major factor behind the implementation of RAD frameworks is re-usability. By making changes to the existing code, or by merely reusing the applications, development has now become very fast and easy.

Software frameworks
Software frameworks and content management systems (CMSs) are entirely different concepts. In the case of CMSs, the reusable modules, plugins and related components are provided with the source code, and all that is required is to plug them in or out. Frameworks need to be installed and imported on the host machine, and then the functions are called. This means that the framework, with its different classes and functions, needs to be called by the programmer depending upon the module and feature required in the application. As far as user-friendliness is concerned, CMSs are very easy to use. CMS products can be used and deployed even by those who do not have very good programming skills.
A framework can be considered a model, a structure or simply a programming template that provides classes, events and methods to develop an application. Generally, a software framework is a real or conceptual structure of software intended to serve as a support or guide for building something that expands the structure into something useful. The software framework can be seen as a layered structure, indicating which kinds of programs can or should be built and the way they interrelate.

PHP-based open source frameworks
Laravel
Prado
Phalcon
Seagull
Symfony
Yii
CodeIgniter
CakePHP

Content Management Systems (CMSs)
Digital repositories and CMSs have a lot of feature-overlap, but both systems are unique in terms of their underlying purposes and the functions they fulfil.
A CMS for developing Web applications is an integrated application that is used to create, deploy, manage and store content on Web pages. The Web content includes plain or formatted text, embedded graphics in multiple formats, photos, video and audio, as well as code, such as third-party APIs for interaction with the user.

Digital repositories
An institutional repository refers to the online archive or library for collecting, preserving and disseminating digital copies of the intellectual output of an institution, particularly in the field of research.
For any academic institution like a university, it also includes digital content such as academic journal articles. It covers both pre-prints and post-prints, articles undergoing peer review, as well as digital versions of theses and dissertations. It even includes other digital assets generated in an institution, such as administrative documents, course notes or learning objectives. Depositing material in an institutional repository is sometimes mandated by some institutions.

PHP-based open source CMSs
Joomla
Typo3
Drupal
Mambo
WordPress

Joomla CMS
Joomla is an award-winning open source CMS written in PHP. It enables the building of websites and powerful online applications. Many aspects, including its user-friendliness and extensible nature, make Joomla the most popular Web-based software development CMS. Joomla is built on the model-view-controller (MVC) Web application framework, which can be used independently of the CMS.
Joomla can store data in a MySQL, MS SQL or PostgreSQL database, and includes features like page caching, RSS feeds, printable versions of pages, news flashes, blogs, polls, search and support for language internationalisation.
According to reports by Market Wire, New York, as of February 2014, Joomla had been downloaded over 50 million times. Over 7,700 free and commercial extensions are available from the official Joomla Extension Directory, and more are available from other sources. It is supposedly the second most used CMS on the Internet, after WordPress. Many websites provide information on installing and maintaining Joomla sites.
Joomla is used across the globe to power websites of all types and sizes:
Corporate websites or portals
Corporate intranets and extranets
Online magazines, newspapers and publications
E-commerce and online reservation sites
Sites offering government applications
Websites of small businesses and NGOs
Community-based portals
School and church websites
Personal or family home pages

Joomla's user base includes:
The military - http://www.militaryadvice.org/
US Army Corps of Engineers - http://www.spl.usace.army.mil/cms/index.php
MTV Networks Quizilla (social networking) - http://www.quizilla.com
New Hampshire National Guard - https://www.nh.ngb.army.mil/
United Nations Regional Information Centre - http://www.unric.org
IHOP (a restaurant chain) - http://www.ihop.com
Harvard University - http://gsas.harvard.edu
and many others

The essential features of Joomla are:
User management
Media manager
Language manager
Banner management
Contact management
Polls
Search
Web link management
Content management
Syndication and newsfeed management
Menu manager
Template management
Integrated help system
System features
Web services
Powerful extensibility

Joomla extensions
Joomla extensions are used to extend the functionality of Joomla-based Web applications. Joomla extensions for multiple categories and services can be downloaded from http://extensions.joomla.org.

Figure 1: Joomla extensions

Joomla! is free software released under the GNU General Public License.
This article goes deep into what really goes on inside an OS while managing and controlling the hardware. The OS hides all the complexities, carries out all the operations and gives end users their requirements through the UI (user interface). GPIO can be considered the simplest of all the peripherals to work on any board, so a small GPIO driver is the best medium to explain what goes on under the hood.
A good embedded systems engineer should, at the very least, be well versed in the C language. Even if the following demonstration can't be replicated (due to the unavailability of hardware or software resources), a careful read through this article will give readers an idea of the underlying processes.

Prerequisites to perform this experiment
C language (high priority)
Raspberry Pi board (any model)
BCM2835-ARM-peripherals datasheet (just Google for it!)
Jumper (female-to-female)
SD card (with bootable Raspbian image)

Here's a quick overview of what device drivers are. As the name suggests, they are pieces of code that drive your device. One can even consider them a part of the OS (in this case, Linux), or a mediator between your hardware and the UI.
A basic understanding of how device drivers actually work is required, so do learn more about that in case you need to. Let's move forward to the GPIO driver, assuming that one knows the basics of device drivers (like inserting/removing the driver from the kernel, probe functionality, etc).
When you insert (insmod) this driver, it will register itself as a platform driver with the OS. The platform device is also registered in the same driver; contrary to this, registering the platform device in the board file is the usual good practice. A peripheral can be termed a platform device if it is a part of the SoC (system on chip). Once the driver is inserted, the registration (platform device and platform driver) takes place,
Figure 3: Custom web page of Apache Web Server 1
Figure 4: Custom web page of Apache Web Server 2

[Figure: Users 1, 2, 3, etc, send HTTP traffic to the Pound server (192.168.10.30), which distributes it between Web Server 1 (192.168.10.31) and Web Server 2 (192.168.10.32).]

Installation and configuration of the Pound gateway server
First, ensure YUM is up and running. Now, download the epel-release-6-8.noarch.rpm package and install it.

Important notes on EPEL
1. EPEL stands for Extra Packages for Enterprise Linux.
2. EPEL is not a part of RHEL, but provides a lot of open source packages for major Linux distributions.
3. EPEL packages are maintained by the Fedora team and are fully open source, with no core duplicate packages and no compatibility issues. They are to be installed using the YUM utility.

The link to download the EPEL release for RHEL 6 (32-bit) is: http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
And for 64-bit, it is: http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Here, epel-release-6-8.noarch.rpm is kept at /opt. Go to the /opt directory and change the permission of the file:

[root@poundgateway opt]# chmod -R 755 epel-release-6-8.noarch.rpm
[root@poundgateway opt]#

Now, install epel-release-6-8.noarch.rpm:

[root@poundgateway opt]# rpm -ivh --aid --force epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@poundgateway opt]#

epel-release-6-8.noarch.rpm installs the repo files necessary to download the Pound package:

[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]# ll
total 16
-rw-r--r--  1 root root  957 Nov  4  2012 epel.repo
-rw-r--r--  1 root root 1056 Nov  4  2012 epel-testing.repo
-rw-r--r--  1 root root   67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root  529 Apr 27  2011 rhel-source.repo
[root@poundgateway yum.repos.d]#

As observed, epel.repo and epel-testing.repo are the newly added repo files. No changes are made in epel.repo and epel-testing.repo. Move the default redhat.repo and rhel-source.repo to the backup location. Now, connect the server to the Internet and, using the yum utility, install Pound:

[root@PoundGateway ~]# yum install Pound*

This will install Pound and Pound-debuginfo, along with the required dependencies.
To verify Pound's installation, type:

[root@PoundGateway ~]# rpm -qa Pound
Pound-2.6-2.el6.i686
[root@PoundGateway ~]#

The location of the Pound configuration file is /etc/pound.cfg. You can view the default Pound configuration file by using the command given below:

[root@PoundGateway ~]# cat /etc/pound.cfg

Make the changes to the Pound configuration file as shown in the code snippet given below:
Comment out the section related to ListenHTTPS, as we do not need HTTPS for now.
Add the IP address 192.168.10.30 under the ListenHTTP section.
Add the IP addresses 192.168.10.31 and 192.168.10.32 with port 80 under the Service Backend section, where 192.168.10.30 is the Pound server, 192.168.10.31 is Web Server 1 and 192.168.10.32 is Web Server 2.

The edited Pound configuration file is:

[root@PoundGateway ~]# cat /etc/pound.cfg
#
# Default pound.cfg
#
# Pound listens on port 80 for HTTP and port 443 for HTTPS
# and distributes requests to 2 backends running on localhost.
# see pound(8) for configuration directives.
# You can enable/disable backends with poundctl(8).
#

User "pound"
Group "pound"
Control "/var/lib/pound/pound.cfg"

ListenHTTP
    Address 192.168.10.30
    Port 80
End
Wikipedia defines a bounce email as a system-generated failed delivery status notification (DSN) or a non-delivery report (NDR), which informs the original sender about a delivery problem. When that happens, the original email is said to have bounced. Broadly, bounces are categorised into two types:
A hard/permanent bounce: This indicates that there exists a permanent reason for the email not to get delivered. These are valid bounces, and can be due to the non-existence of the email address, an invalid domain name (DNS lookup failure), or the email provider blacklisting the sender/recipient email address.
A soft/temporary bounce: This can occur due to various reasons at the sender or recipient level. It can evolve due to a network failure, the recipient mailbox being full (quota-exceeded), the recipient having turned on a vacation reply, the local Message Transfer Agent (MTA) not responding or being badly configured, and a whole lot of other reasons.

Take the case of an email sent to [email protected], with the envelope sender, set by the local MTA running there (mta.example.com) or by the PHP script, as [email protected]. Now, mta.example.com looks up the DNS MX records for somewhere.com, chooses a host from that list, gets its IP address and tries to connect to the MTA running on somewhere.com, port 25, via an SMTP connection. Now, the MTA of somewhere.com is in trouble, as it can't find a user 'receiver' in its local user table. The mta.somewhere.com responds to example.com with an SMTP failure code, stating that the user lookup failed (code 550). It's time for mta.example.com to generate a bounce email to the address in the return-path email header (the envelope sender), with a message that the email to [email protected] failed. That's a bounce email. Properly maintained mailing lists will have every email passing through them branded with a generic email ID, say [email protected], as the envelope sender, and bounces to that will be wasted if left unhandled.

bounces-{ prefix }-{ hash(prefix, secretkey) }@sender_domain

bounces-{ to_address }-{ delivery_timestamp }-{ encode( to_address-delivery & timestamp ), secretKey }@somewhere.com

PHP has its own hash-generating functions that can make things easier. Since PHP's HMACs cannot be decoded, but only compared, the idea will be to adjust the recipient email ID in the prefix part of the VERP address along with its hash. On receipt, the prefix and the hash can be compared to validate the integrity of the bounce.
We will string-replace the @ in the recipient email ID with a dot while building the prefix.
The current timestamp can be generated in PHP by:

$current_timestamp = time();

There's still work to do before the email is sent, as the local MTA at example.com may try to set its own custom return-path for messages it transmits. In the example below, we adjust the Exim configuration on the MTA to override this behaviour. The bounce email can be made use of in handleBounce.php by using a simple POST request.

// This will help us cut the address part with the - symbol
$hashedAddressPart = explode( '-', $hashedSlice[1] );

// Now we have the prefix in $hashedAddressPart[0-2]
// and the hash in $hashedAddressPart[3]
$verpPrefix = $hashedAddressPart[0] . '-' . $hashedAddressPart[1] . '-' . $hashedAddressPart[2];

// Extracting the bounce time.
$bounceTime = $hashedAddressPart[2];

// Valid time for a bounce to happen. The values can be subtracted to find
// out the time in between, and even used to set an accept time, say 3 days.
if ( $bounceTime < $timeNow ) {
	if ( hash_hmac( $hashAlgorithm, $verpPrefix, $hashSecretKey ) === $hashedAddressPart[3] ) {
		// Bounce is valid, as the comparisons return true.
		$to = str_replace( '.', '@', $hashedAddressPart[1] );
		return $to;
	}
}

Simple tests to differentiate between a permanent and a temporary bounce
One of the greatest challenges while writing a bounce processor is to make sure it handles only the right bounces, or the permanent ones. A bounce processing script that reacts to every single bounce can lead to mass unsubscription of users from the mailing list, and a lot of havoc. Exim helps us here in a great way by including an additional X-Failed-Recipients: header in a permanent bounce email. This key can be checked for in the regex function we wrote earlier, and action can be taken only if it exists.

/**
 * Check if the bounce corresponds to a permanent failure
 * Can be added to the extractHeaders() function above
 */
function isPermanentFailure( $email ) {
	$lineBreaks = explode( "\n", $email );
	foreach ( $lineBreaks as $lineBreak ) {
		if ( preg_match( "/^X-Failed-Recipients: (.*)/", $lineBreak, $permanentFailMatch ) ) {
			$bounceHeaders[ 'x-failed-recipients' ] = $permanentFailMatch;
			return true;
		}
	}
}

Taking action on the failing recipient
Now that you have got the failing recipient, the task would be to record his bounce history and take relevant action. A recommended approach would be to maintain a bounce records table in the database, which would store the failed recipient, the bounce timestamp and the failure reason. This can be inserted into the database on every bounce processed.
Even today, we have a number of large organisations that send more than 100 emails every minute and still have all bounces directed to /dev/null. This results in far too many emails being sent to undeliverable addresses, and eventually leads to frequent blacklisting of the organisation's mail server by popular providers like Gmail, Hotmail, etc.
If bounces are directed to an IMAP maildir, the regex functions won't be necessary, as the PHP IMAP library can parse the headers readily for you.
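The VERP scheme described above can be sketched outside PHP as well. The snippet below (Python, purely illustrative; the names make_verp and recover_recipient are our own, not from the article's code) builds a bounce address whose prefix carries the recipient and timestamp, signs the prefix with an HMAC, and validates it on receipt:

```python
# Illustrative VERP builder/validator: the prefix carries the recipient
# (with '@' replaced by '.') and a timestamp; an HMAC over the prefix
# lets the bounce handler verify the address was not tampered with.
import hashlib
import hmac

SECRET = b"secretkey"  # hypothetical shared secret


def make_verp(recipient, timestamp, domain="sender.example"):
    prefix = "bounces-%s-%d" % (recipient.replace("@", "."), timestamp)
    digest = hmac.new(SECRET, prefix.encode(), hashlib.sha256).hexdigest()
    return "%s-%s@%s" % (prefix, digest, domain)


def recover_recipient(verp_address):
    local_part = verp_address.split("@", 1)[0]
    prefix, digest = local_part.rsplit("-", 1)
    expected = hmac.new(SECRET, prefix.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(digest, expected):
        return None  # forged or corrupted bounce address
    # Undo the '@' -> '.' substitution (assumes a dot-free local part).
    address_part = prefix.split("-")[1]
    return address_part.replace(".", "@", 1)


addr = make_verp("alice@somewhere.com", 1409500800)
print(recover_recipient(addr))  # recovers the failing recipient
```

As in the article's PHP, the hash is never decoded, only recomputed and compared; hmac.compare_digest plays the role of PHP's === check on hash_hmac() output.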
To verify if the new repositories have been added to the repo list, run the following command and check the output to see if the repository has been added:

[root@bookingwire sridhar]# yum repolist

If you happen to use an Ubuntu VPS, then you should use the following command to install Varnish:

[root@bookingwire sridhar]# sudo apt-get install varnish

Basic Varnish configuration
The Varnish configuration file is located in /etc/sysconfig/varnish on CentOS, and /etc/default/varnish on Ubuntu. Open the file in your terminal using the nano or vim text editor. Varnish provides us three ways of configuring it; we prefer Option 3. So for our 2GB server, the configuration steps are as shown below (the lines with comments have been stripped off for the sake of clarity):
PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-t ${VARNISH_TTL} \
-w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
-u varnish -g varnish \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_min=400 \
-p thread_pool_max=4000 \
-p session_linger=50 \
-p sess_workspace=262144 \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}

The first line, when substituted with the variables, will read -a :80,:443 and instruct Varnish to serve all requests made on ports 80 and 443. We want Varnish to serve all HTTP and HTTPS requests.
To set the thread pools, first determine the number of CPU cores that your VPS uses, and then update the directives:

[root@bookingwire sridhar]# grep processor /proc/cpuinfo
processor : 0
processor : 1

This means you have two cores. The formula to use is:

-p thread_pools=<Number of CPU cores> \
-p thread_pool_min=<800 / Number of CPU cores> \

The -s ${VARNISH_STORAGE} translates to -s malloc,1G after variable substitution, and is the most important directive. This allocates 1GB of RAM for exclusive use by Varnish. You could also specify -s file,/var/lib/varnish/varnish_storage.bin,10G, which tells Varnish to use the file caching mechanism on the disk and that 10GB has been allocated to it. Our suggestion is that you should use the RAM.

Configure the default.vcl file
The default.vcl file is where you will have to make most of the configuration changes, in order to tell Varnish about your Web servers, assets that shouldn't be cached, etc. Open the default.vcl file in your favourite editor:

[root@bookingwire sridhar]# nano /etc/varnish/default.vcl

Since we expect to have two NGINX servers running our application, we want Varnish to distribute the HTTP requests between these two servers. If, for any reason, one of the servers fails, then all requests should be routed to the healthy server. To do this, add the following to your default.vcl file:

backend bw1 {
	.host = "146.185.129.131";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}

backend bw2 {
	.host = "37.139.24.12";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}

backend bw1ssl {
	.host = "146.185.129.131";
	.port = "443";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}

backend bw2ssl {
	.host = "37.139.24.12";
	.port = "443";
	.probe = {
		.url = "/google0ccdbf1e9571f6ef.html";
		.interval = 5s;
		.timeout = 1s;
		.window = 5;
		.threshold = 3;
	}
}

director default_director round-robin {
	{ .backend = bw1; }
	{ .backend = bw2; }
}

director ssl_director round-robin {
	{ .backend = bw1ssl; }
	{ .backend = bw2ssl; }
}

sub vcl_recv {
	if (server.port == 443) {
		set req.backend = ssl_director;
	}
	else {
		set req.backend = default_director;
	}
}

You might have noticed that we have used public IP
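The behaviour these round-robin directors give us can be pictured with a small sketch (Python, purely illustrative, not VCL): requests rotate across the backends, and a backend whose probe is failing is skipped until it recovers.

```python
# Toy round-robin director with health flags, mimicking what a Varnish
# round-robin director does once a probe marks a backend sick.
class RoundRobinDirector:
    def __init__(self, backends):
        self.backends = backends                   # list of backend names
        self.healthy = {b: True for b in backends} # probe results
        self.next_index = 0

    def pick(self):
        # Try each backend once, starting from the rotation point.
        for _ in range(len(self.backends)):
            backend = self.backends[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.backends)
            if self.healthy[backend]:
                return backend
        return None                                # every backend is down


d = RoundRobinDirector(["bw1", "bw2"])
print([d.pick() for _ in range(4)])  # ['bw1', 'bw2', 'bw1', 'bw2']
d.healthy["bw1"] = False             # say the probe failed repeatedly
print([d.pick() for _ in range(2)])  # ['bw2', 'bw2']
```

In real Varnish the healthy flag is flipped by the .probe settings: the probe URL is fetched every .interval, and .threshold failures out of the last .window attempts mark the backend sick.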
addresses since we had not enabled private networking within our servers. You should define the backends, one each for the type of traffic you want to handle. Hence, we have one set to handle HTTP requests and another to handle the HTTPS requests.
It's a good practice to perform a health check to see if the NGINX Web servers are up. In our case, we kept it simple by checking if the Google webmaster file was present in the document root. If it isn't present, then Varnish will not include the Web server in the round-robin league and won't redirect traffic to it.

.probe = { .url = "/google0ccdbf1e9571f6ef.html";

The above command checks for the existence of this file at each backend. You can use this to take an NGINX server out intentionally, either to update the version of the application or to run scheduled maintenance checks. All you have to do is rename this file so that the check fails!
In spite of our best efforts to keep our servers sterile, there are a number of reasons that can cause a server to go down. Two weeks back, we had one of our servers go down, taking more than a dozen sites with it, because the master boot record of CentOS was corrupted. In such cases, Varnish can handle the incoming requests even if your Web server is down. The NGINX Web server sets an expires header (HTTP 1.0) and the max-age (HTTP 1.1) for each page that it serves. If set, the max-age takes precedence over the expires header. Varnish is designed to request the backend Web servers for new content every time the content in its cache goes stale. However, in a scenario like the one we faced, it's impossible for Varnish to obtain fresh content. In this case, setting the Grace in the configuration file allows Varnish to serve (stale) content even if the Web server is down. To have Varnish serve the (stale) content, add the following lines to your default.vcl:

sub vcl_recv {
	set req.grace = 6h;
}

sub vcl_fetch {
	set beresp.grace = 6h;
}

if (!req.backend.healthy) {
	unset req.http.Cookie;
}

The last segment tells Varnish to strip all cookies for an authenticated user, and serve an anonymous version of the page, if all the NGINX backends are down.
Most browsers support encoding but report it differently. NGINX sets the encoding as Vary: Cookie, Accept-Encoding. If you don't handle this, Varnish will cache the same page once each, for each type of encoding, thus wasting server resources. In our case, it would gobble up memory. So add the following commands to vcl_recv to have Varnish cache the content only once:

if (req.http.Accept-Encoding) {
	if (req.http.Accept-Encoding ~ "gzip") {
		# If the browser supports it, we'll use gzip.
		set req.http.Accept-Encoding = "gzip";
	}
	else if (req.http.Accept-Encoding ~ "deflate") {
		# Next, try deflate if it is supported.
		set req.http.Accept-Encoding = "deflate";
	}
	else {
		# Unknown algorithm. Remove it and send unencoded.
		unset req.http.Accept-Encoding;
	}
}

Now, restart Varnish:

[root@bookingwire sridhar]# service varnish restart

Additional configuration for content management systems, especially Drupal
A CMS like Drupal throws up additional challenges when configuring the VCL file. We'll need to include additional directives to handle the various quirks. You can modify the directives below to suit the CMS that you are using. When using a CMS like Drupal, if there are files that you don't want cached for some reason, add the following commands to your default.vcl file, in the vcl_recv section:

if (req.url ~ "^/status\.php$" ||
	req.url ~ "^/update\.php$" ||
	req.url ~ "^/ooyala/ping$" ||
	req.url ~ "^/admin/build/features" ||
	req.url ~ "^/info/.*$" ||
	req.url ~ "^/flag/.*$" ||
	req.url ~ "^.*/ajax/.*$" ||
	req.url ~ "^.*/ahah/.*$") {
	return (pass);
}

Varnish sends the length of the content (see the Varnish log output above) so that browsers can display the progress bar. However, in some cases, when Varnish is unable to tell the browser the specified content-length (like streaming audio), you will have to pass the request directly to the Web server. To do this, add the following command to your default.vcl:
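The effect of the Accept-Encoding rules above is simply to collapse the many header variants browsers send into at most two cache keys. A sketch of the same decision (Python, illustrative only, not VCL):

```python
# Normalise Accept-Encoding the way the VCL snippet does, so the cache
# stores at most two encoded variants of each page (gzip and deflate)
# instead of one variant per distinct header string.
def normalise_accept_encoding(value):
    if value is None:
        return None            # header absent: cache the plain variant
    if "gzip" in value:
        return "gzip"          # preferred whenever the browser supports it
    if "deflate" in value:
        return "deflate"       # otherwise fall back to deflate
    return None                # unknown algorithm: drop the header entirely


print(normalise_accept_encoding("gzip, deflate, sdch"))  # gzip
print(normalise_accept_encoding("deflate"))              # deflate
print(normalise_accept_encoding("br"))                   # None
```

Three different browsers sending "gzip, deflate", "deflate, gzip" and "gzip" would, without normalisation, produce three cached copies of the same page; with it, they share one.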
if (req.url ~ "^/content/music/$") {
	return (pipe);
}

Drupal has certain files that shouldn't be accessible to the outside world, e.g., cron.php or install.php. However, you should be able to access these files from the set of IPs that your development team uses. At the top of default.vcl, include the following, replacing the IP address block with that of your own:

acl internal {
	"192.168.1.38"/46;
}

Now, to prevent the outside world from accessing these pages, we'll throw an error. So, inside the vcl_recv function, include the following:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
	error 404 "Page not found.";
}

If you prefer to redirect to an error page, then use this instead:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
	set req.url = "/404";
}

Our approach is to cache all assets like images, JavaScript and CSS for both anonymous and authenticated users. So include a snippet inside vcl_recv to unset the cookie set by Drupal for these assets. If an asset still does not get cached, you should track down the cookie and update the regex above to strip it.
Once you have done that, head to /admin/config/development/performance, enable the Page Cache setting, and set a non-zero time for Expiration of cached pages.
Then update settings.php with the following snippet, replacing the IP address with that of your machine running Varnish:

$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('37.139.8.42');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache'] = 1;
$conf['cache_lifetime'] = 0;
$conf['page_cache_maximum_age'] = 21600;

You can install the Drupal varnish module (http://www.drupal.org/project/varnish), which provides better integration with Varnish, and include the following lines in your settings.php:

$conf['cache_backends'] = array('sites/all/modules/varnish/varnish.cache.inc');
$conf['cache_class_cache_page'] = 'VarnishCache';

Checking if Varnish is running and serving requests
Instead of logging to a normal log file, Varnish logs to a shared memory segment. Run varnishlog from the command line, access your IP address/URL from the browser, and view the Varnish messages. It is not uncommon to see a 503 service unavailable message. This means that Varnish is unable to connect to NGINX, in which case you will see an error line in the log (only the relevant portion of the log is reproduced for clarity):

[root@bookingwire sridhar]# varnishlog

Figure 4: Pingdom test result after configuring Varnish

[root@bookingwire sridhar]# nginx -V
I
magine an old Hindi movie where the villain and his consequences of the lie. It thus becomes vitally important
subordinate are conversing over the telephone, and the for the state to use all of its powers to repress dissent, for the
hero intercepts this call to listen in on their conversation truth is the mortal enemy of the lie, and thus by extension, the
a perfect man in the middle (MITM) scenario. Now truth is the greatest enemy of the state.
extend this to the network, where an attacker intercepts So let us interpret this quote by a leader of the
communication between two computers. infamous Nazi regime from the perspective of the ARP
Here are two possibilities with respect to what an attacker protocol: If you repeatedly tell a device who a particular
can do to intercepted traffic: MAC address belongs to, the device will eventually
1. Passive attacks (also called eavesdropping or only believe you, even if this is not true. Further, the device
listening to the traffic): These can reveal sensitive will remember this MAC address only as long as you keep
information such as clear text (unencrypted) login IDs and telling the device about it. Thus, not securing an ARP
passwords. cache is dangerous to network security.
2. Active attacks: These modify the traffic and can be used
for various types of attacks such as replay, spoofing, etc. Note: From the network security professionals
An MITM attack can be launched against cryptographic view, it becomes absolutely necessary to monitor ARP
systems, networks, etc. In this article, we will limit our traffic continuously and limit it to below a threshold. Many
discussions to MITM attacks that use ARP spoofing. managed switches and routers can be configured to monitor
and control ARP traffic below a threshold.
ARP spoofing
Joseph Goebbels, Nazi Germanys minister for propaganda, An MITM attack is easy to understand using this
famously said, If you tell a lie big enough and keep repeating context. Attackers trying to listen to traffic between any two
it, people will eventually come to believe it. The lie can devices, say a victims computer system and a router, will
be maintained only for such time as the state can shield launch an ARP spoofing attack by sending unsolicited (what
the people from the political, economic and/or military this means is an ARP reply packet sent out without receiving
The tool has command line options, but its GUI is easier to use, and can be started by running:

ettercap -G
This article, the first of a multi-part series, familiarises readers with Asterisk, which is a
software implementation of a private branch exchange (PBX).
Asterisk is a revolutionary open source platform started by Mark Spencer that has shaken up the telecom world. This series is meant to familiarise you with it, and educate you enough to be a part of it in order to enjoy its many benefits.
If you are a technology freak, you will be able to make your own PBX for your office or home after going through this series. As a middle level manager, you will be able to guide a techie to do the job, while senior level managers with a good appreciation of the technology and the minimal costs involved would be in a position to direct somebody to set up an Asterisk PBX. If you are an entrepreneur, you can adopt one of the many business models built around Asterisk. As you will see, it is worthwhile to at least evaluate the option.

History
In 1999, Mark Spencer of Digium fame started a Linux technical support company with US$ 4000. Initially, he had to be very frugal; so buying one of those expensive PBXs was unthinkable. Instead, he started programming a PBX for his requirements. Later, he published the software as open source, and a lot of others joined the community to further develop the software. The rest is history.

The statistics
Today, Asterisk claims to have 2 million downloads every year, and is running on over 1 million servers, with 1.3 million new endpoints created annually. A 2012 statistic by Eastern Management claims that 18 per cent of all PBX lines in North America are open source-based, and the majority of them are on Asterisk. Indian companies have also been adopting Asterisk for a few years now. The initial thrust was from international call centres. A large majority of the smaller call centres (50-100 seater) use 'Vicidial', another open source application based on Asterisk. IP PBX penetration in the Indian market is not very high due to certain regulatory misinterpretations. Anyhow, this unclear environment is gradually getting clarity, and very soon, we will see an astronomic growth of Asterisk in the Indian market.
The call centre boom also led to the development of the Asterisk ecosystem comprising Asterisk-based product companies, software supporters, hardware resellers, etc, across India. This presents a huge opportunity for entrepreneurs.

Some terminology
Before starting, I would like to introduce some basic terms for the benefit of readers who are novices in this field. Let us start with the PBX or private branch exchange, which is the heart of all corporate communication. All the telephones seen in an office environment are connected to the PBX, which in turn connects you to the outside world. The internal telephones are called subscribers and the external lines are called trunk lines.

Mark Spencer, the founder of Asterisk

The trunk lines connect the PBX to the outside world, or the PSTN (Public Switched Telephony Network). Analogue trunks (FXO: Foreign eXchange Office) are based on very old analogue technology, which is still in use in our homes and in some companies. Digital trunk technology, or ISDN (Integrated Services Digital Network), evolved in the 80s with mainly two types of connections: BRI (Basic Rate Interface) for SOHO (small office/home office) use, and PRI (Primary Rate Interface) for corporate use. In India, analogue trunks are used for SOHO trunking, but BRI is no longer used at all. Anyhow, PRI is quite popular among companies. IP/SIP (Internet Protocol/Session Initiation Protocol) trunking has been used by international call centres for quite some time. Now, many private providers like Tata Telecom have started offering SIP trunking for domestic calls also. The option of GSM trunking through a GSM gateway using SIM cards is also quite popular, due to the flexibility offered in costs, prepaid options and network availability.
The users connected to the PBX are called subscribers. Analogue telephones (FXS: Foreign eXchange Subscriber) are still very commonly used and are the cheapest. As Asterisk is an IP PBX, we need a VoIP FXS gateway to convert the IP signals to analogue signals. Asterisk supports IP telephones, mainly using SIP.
Nowadays, Wi-Fi clients are available even for smartphones, which enable the latter to work like extensions. These clients bring in a revolutionary transformation to the telephony landscape, analogous to paperless offices and telephone-less desks. The same smartphone used to make calls over GSM networks becomes a dual-purpose phone, also working like a desk extension. Just for a minute, consider the limitless possibilities enabled by this new transformed extension phone.
Extension roaming: Employees can roam about anywhere in the office (participate in a conference, visit a colleague; doctors can visit their in-patients) and yet receive calls as if they were seated at their desks.
External extensions: The employees could be at home, at a friend's house, or even out making a purchase, and still receive the same calls, as if at their desks.
Increased call accountability: Calls can be recorded and monitored for quality or security purposes at the PBX.
Lower telephone costs: The volume of calls passing through the PBX makes it possible to negotiate with the service provider for better rates.
The advantages that a roaming extension brings are many, which we will explore in more detail in subsequent editions.
Let us look into the basics of Asterisk. "Asterisk is like a box of Lego blocks for people who want to create communications applications. It includes all the building blocks needed to create a PBX, an IVR system, a conference bridge and virtually any other communications app you can imagine," says an excerpt from asterisk.org.
Asterisk is actually a piece of software. In very simple and generic terms, the following are the steps required to create an application based on it:
1. Procure standard hardware.
2. Install Linux.
3. Download the Asterisk software.
4. Install Asterisk.
5. Configure it.
6. Procure hardware interfaces for the trunk line and configure them.
7. Procure hardware for subscribers and configure them.
8. You're then ready to make your calls.
Procure standard desktop or server hardware, based on a Pentium, Xeon, i3, etc. RAM is an important factor, and could be 2GB, 4GB or 8GB. These two factors decide the number of concurrent calls. A hard disk capacity of 500GB or 1TB is mainly for space to store voice files for VoiceMail or VoiceLogger. The hard disk's speed also influences the number of concurrent calls.
The next step is to choose a suitable OS: Fedora, Debian, CentOS or Ubuntu are well suited for this purpose. After this, the Asterisk software may be downloaded from www.asterisk.org/downloads/. Either the newest LTS (Long Term Support) release or the latest standard version can be downloaded. LTS versions are released once in four years. They are more stable, but have fewer features than the standard version, which is released once a year. Once the software is downloaded, the installation may be carried out as per the instructions provided. We'll go into the details of the installation in later sessions.
The download page also offers the option to download AsteriskNOW, which is an ISO image of Linux, Asterisk and the FreePBX GUI. If you prefer a very quick and simple installation without much flexibility, you may choose this variant.
After the installation, one needs to create the trunks and users, and set up some more features, to be able to start using the system. The administrators can make these configurations directly in the dial plan, or there are GUIs like FreePBX, which enable easy administration.
Depending on the type of trunk chosen, we need to procure hardware. If we are connecting a normal analogue line, an FXO card with one port needs to be procured, in PCI or PCIe format, depending on the slots available on the server. After inserting the card, it has to be configured. Similarly, if you have to connect analogue phones, you need to procure FXS gateways. IP phones can be directly connected to the system over the LAN.
Exploring the PBX further, you will be astonished by the power of Asterisk. It comes with a built-in voice logger, which can be customised to record either all calls or those from selective people. In most proprietary PBXs, this would have been an additional component. Asterisk not only provides a voice mail box, but also has the option to convert the voice mail to an attachment that can be sent to you as an email. The Asterisk IVR is very powerful; it has multiple levels, digit collection, database and Web-service integration, and speech recognition.
There are also lots of applications based on Asterisk, like Vicidial, which is a call-centre suite for inbound and outbound dialling. For the latter, one can configure campaigns with lists of numbers, dial these numbers in predictive dialling mode and connect the calls to the agents. Similarly, inbound dialling can also be configured with multiple agents, and the calls routed based on multiple criteria like the region, skills, etc.
Asterisk also easily integrates with multiple enterprise applications (like CRM and ERP) over CTI (computer telephony interface) technologies like TAPI (Telephony API), or by using simple URL integration.
O'Reilly has a book titled 'Asterisk: The Future of Telephony', which can be downloaded. I would like to take you through the power of Asterisk in subsequent issues, so that you and your network can benefit from this remarkable product, which is expected to change the telephony landscape of the future.

By: Devasia Kurian
The author is the founder and CEO of *astTECS.
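To give a concrete flavour of what configuring "directly in the dial plan" means, here is a minimal, hypothetical sketch of one SIP subscriber and its dial plan, in the classic sip.conf/extensions.conf format used by Asterisk releases of this era (the extension number, secret and context name are invented):

```
; sip.conf - a hypothetical SIP subscriber
[6001]
type=friend
host=dynamic          ; the phone registers from any address
secret=s3cr3t         ; illustrative password
context=internal      ; dial plan context this subscriber lands in

; extensions.conf - ring the phone for 20 seconds, then go to voicemail
[internal]
exten => 6001,1,Dial(SIP/6001,20)
exten => 6001,n,VoiceMail(6001@default)
exten => 6001,n,Hangup()
```

After editing such files, the configuration can be reloaded from the Asterisk console (for example, with sip reload and dialplan reload), or managed through a GUI like FreePBX instead.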
This DIY article is for systems admins and software hobbyists, and teaches them
how to create a bootable USB that is loaded with multiple ISOs.
Systems administrators and other Linux enthusiasts use multiple CDs or DVDs to boot and install operating systems on their PCs. But it is somewhat difficult and costly to maintain one CD or DVD for each OS (ISO image file) and to carry around all these optical disks; so, let's look at the alternative: a multi-boot USB.
The Internet provides many ways (in Windows and in Linux) to convert a USB drive into a bootable USB. Typically, one can create a bootable USB that contains a single OS. So, if you want to change the OS (ISO image), you have to format the USB. To avoid formatting the USB each time the ISO is changed, use Easy2Boot. In my case, the RMPrepUSB website saved me from unnecessarily formatting the USB drive by introducing the Easy2Boot option. Easy2Boot is open source; it consists of plain text batch files and open source grub4dos utilities. It has no proprietary software.
...Fat32 (0x0c). You can choose the ext2/ext3 file systems also, but they will not load some OSs. So, Fat32 is the best choice for most of the ISOs.
Now download grub4dos-0.4.5c (not grub4dos-0.4.6a) from https://code.google.com/p/grub4dos-chenall/downloads/list and extract it on the desktop.
Next, install grub4dos on the MBR of your USB stick with a zero-second time-out, by typing the following command at the terminal:

sudo ~/Desktop/grub4dos-0.4.5c/bootlace.com --time-out=0 /dev/sdb

Note: You can change the path to your grub4dos folder. sdb is your USB drive, and can be identified with the df command in a terminal, or by using the gparted or disk utility tools.
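Since a multi-boot stick ends up holding many large ISO files, it is worth verifying each download against its published checksum before copying it over. A minimal sketch follows (the file name and checksum are illustrative, and a stand-in file is generated so the snippet can run on its own):

```shell
# Verify a downloaded ISO against its published MD5 checksum before copying
# it to the USB stick. File name and checksum are illustrative; the printf
# below creates a stand-in file just so the sketch is self-contained.
iso="demo-distro.iso"
expected="5d41402abc4b2a76b9719d911017c592"   # MD5 of the bytes "hello"

printf 'hello' > "$iso"                       # stand-in for the real download
actual=$(md5sum "$iso" | awk '{ print $1 }')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK: safe to copy"
else
    echo "checksum mismatch: re-download $iso" >&2
fi
```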
You may have heard of the many embedded target boards available today, like the BeagleBoard, Raspberry Pi, BeagleBone, PandaBoard, Cubieboard, Wandboard, etc. But once you decide to start development for them, the right hardware with all the peripherals may not be available. The solution is to start development on embedded Linux for ARM by emulating the hardware with QEMU, which can be done easily without the need for any hardware. There are no risks involved, either.
QEMU is an open source emulator that can emulate the execution of a whole machine with a full-fledged OS running. QEMU supports various architectures, CPUs and target boards. To start with, let's emulate the Versatile Express board as a reference, since it is simple and well supported by recent kernel versions. This board comes with a Cortex-A9 (ARMv7) based CPU.
In this article, I would like to describe the process of cross compiling the Linux kernel for the ARM architecture with device tree support. It is focused on covering the entire process of working, from boot loader to file system, with SD card support. As this process is almost similar to working with most target boards, you can apply these techniques on other boards too.

Device tree
The Flattened Device Tree (FDT) is a data structure that describes the hardware; it is an initiative that came from Open Firmware. From the device tree perspective, the kernel no longer contains the hardware description; that is located in a separate binary called the device tree blob (dtb) file. So, one compiled kernel can support various hardware configurations within a wider architecture family. For example, the same kernel built for the OMAP family can work with various targets like the BeagleBoard, BeagleBone, PandaBoard, etc, with different dtb files. The boot loader should be customised to support this, as two binaries (the kernel image and the dtb file) are to be loaded in memory. The boot loader passes the hardware description to the kernel in the form of the dtb file. Recent kernel versions come with a built-in device tree compiler, which can generate all the dtb files related to the selected architecture family from device tree source (dts) files. Using the device tree for ARM has become mandatory for all new SoCs, with support from recent kernel versions.
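To give a flavour of what dts source looks like, here is a minimal, purely illustrative fragment (the compatible string is invented, and the memory node loosely mirrors the Versatile Express RAM base at 0x60000000):

```
/dts-v1/;

/ {
        model = "Illustrative board";
        compatible = "example,board";   /* made-up compatible string */
        #address-cells = <1>;
        #size-cells = <1>;

        chosen {
                bootargs = "console=ttyAMA0 root=/dev/mmcblk0";
        };

        memory@60000000 {
                device_type = "memory";
                reg = <0x60000000 0x40000000>;  /* 1 GiB at 0x60000000 */
        };
};
```

A fragment like this is what the kernel's device tree compiler (dtc) turns into a .dtb blob for the boot loader to pass on.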
Figure 2: Kernel configuration: RAM disk support

...and the space bar to select among the various states (blank, m or *).
4) Make sure devtmpfs is enabled under the Device Drivers and Generic Driver options.
Now, let's go ahead with building the kernel, as follows:

#generate kernel image as zImage and necessary dtb files
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs
#transform zImage to use with u-boot
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- uImage \
LOADADDR=0x60008000
#copy necessary files to sdcard
cp arch/arm/boot/zImage /mnt/sdcard
cp arch/arm/boot/uImage /mnt/sdcard
cp arch/arm/boot/dts/*.dtb /mnt/sdcard
#Build dynamic modules and copy to suitable destination
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules_install \
INSTALL_MOD_PATH=<mount point of rootfs>

You may skip the last two steps for the moment, as the given configuration steps avoid dynamic modules. All the necessary modules are configured as static.

Getting rootfs
We require a file system to work with the kernel we've built. Download the pre-built rootfs image to test with QEMU from the following link: http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.2/machines/qemu/qemuarm/core-image-minimal-qemuarm.ext3 and copy it to the SD card (/mnt/image), renaming it as rootfs.img for easy usage. You may obtain the rootfs image from some other repository, or build it from sources using Busybox.
Now the kernel, dtb and rootfs can be booted together in QEMU:

qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-sd /mnt/sdcard/rootfs.img -append "root=/dev/mmcblk0 console=ttyAMA0"

In case the SD card image file holds a valid partition table, we need to refer to the individual partitions like /dev/mmcblk0p1, /dev/mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.

Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/image
# you can go for a quick test of the generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio

Let's ignore errors such as u-boot being unable to locate the kernel image or other suitable files.

The final steps
Let's boot the system with u-boot, using an image file as the SD card, and make sure the QEMU PATH is not disturbed. Unmount the SD card image and then boot using QEMU.

umount /mnt/sdcard
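The steps above copy files into an SD card image mounted at /mnt/sdcard; if you are starting from scratch, an empty image file can be created first with dd. A minimal sketch (the name and 64 MiB size are illustrative; a real rootfs needs more space):

```shell
# Create an empty 64 MiB file to act as the emulated SD card; it can then be
# formatted and loop-mounted (e.g., at /mnt/sdcard) to receive the kernel,
# dtb and rootfs files. Name and size here are illustrative.
dd if=/dev/zero of=sdcard.img bs=1M count=64 2>/dev/null
ls -l sdcard.img
```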
As the Internet of Things becomes more of a reality, Contiki, an open source OS, allows DIY enthusiasts to experiment with connecting microcontrollers to the Internet.
Contiki is an open source operating system for connecting tiny, low-cost, low-power microcontrollers to the Internet. It is preferred because it supports various Internet standards, rapid development and a selection of hardware, has an active community to help, and has commercial support bundled with an open source licence.
Contiki is designed for tiny devices, and thus its memory footprint is far less when compared with other systems. It supports full TCP with IPv6, and the device's power management is handled by the OS. All the modules of Contiki are loaded and unloaded during run time; it implements protothreads, uses a lightweight file system, and supports various hardware platforms with sleepy routers (routers which sleep between message relays).
One important feature of Contiki is its use of the Cooja simulator for emulation in case any of the hardware devices are not available.

Installation of Contiki
Contiki can be downloaded as Instant Contiki, which is available as a single download that contains an entire Contiki development environment. It is an Ubuntu Linux virtual machine that runs in VMware Player, and has Contiki and all the development tools, compilers and simulators used in Contiki development already installed. Most users prefer Instant Contiki over the source code binaries. The current version of Contiki (at the time of writing this post) is 2.7.
Step 1: Install VMware Player (which is free for academic and personal use).
Step 2: Download the Instant Contiki virtual image of size 2.5 GB, approximately (http://sourceforge.net/projects/contiki/files/Instant%20Contiki/) and unzip it.
Step 3: Open the virtual machine and open the Contiki OS; then wait till the login screen appears.
Step 4: Input the password as 'user'; this shows the desktop of Ubuntu (Contiki).

Running the simulation
To run a simulation, Contiki comes with many prebuilt modules that can be readily run on the Cooja simulator or on the real hardware platform. There are two methods of opening the Cooja simulator window.
Method 1: On the desktop, as shown in Figure 1, double click the Cooja icon. It will compile the binaries for the first time and open the simulation windows.
Method 2: Open the terminal and go to the Cooja directory:

pradeep@localhost$] cd contiki/tools/cooja
pradeep@localhost$] ant run

You can see the simulation window as shown in Figure 2.

Creating a new simulation
To create a simulation in Contiki, go to the File menu -> New Simulation and name it as shown in Figure 3.
Select any one radio medium, in this case -> Unit Disk Graph Medium (UDGM): Distance Loss, and click Create.
Figure 4 shows the simulation window, which has the following windows.
Network window: This shows all the motes in the simulated network.
Timeline window: This shows all the events over time.
Mote output window: All serial port outputs will be shown here.
Notes window: User notes information can be put here.
Simulation control window: Users can start, stop and pause the simulation from here.

Adding the sensor motes
Once the simulation window is opened, motes can be added to the simulation using Menu: Motes -> Add Motes. Since we are adding the motes for the first time, the type of mote has to be specified. There are more than 10 types of motes supported by Contiki. Here are some of them:
MicaZ
Sky
Trxeb1120
Trxeb2520
cc430
ESB
eth11
Exp2420
Exp1101
Exp1120
WisMote
Z1
Contiki will generate object codes for these motes to run on the real hardware, and also to run on the simulator if the hardware platform is not available.
Step 1: To add a mote, go to Add Motes -> Select any of the motes given above -> MicaZ mote. You will get the screen shown in Figure 5.
Step 2: Cooja opens the Create Mote Type dialogue box, which gives the name of the mote type as well as the Contiki application that the mote type will run. For this example, click the button on the right hand side to choose the Contiki application and select /home/user/contiki/examples/hello-world/hello-world.c. Then, click Compile.
Step 3: Once compiled without errors, click Create (Figure 5).
Step 4: Now the screen asks you to enter the number of motes to be created and their positions (random, ellipse, linear or manual positions).
In this example, 10 motes are created. Click the Start button in the Simulation Control window and enable the mote's Log Output: printf() statements in the View menu of the Network window. The Network window shows the output Hello World in the sensors. Figure 6 illustrates this.
This is a simple output of the Network window. If real MicaZ motes are connected, the Hello World will be displayed in the LCD panel of the sensor motes. The overall output is shown in Figure 7.
The output of the above Hello World application can also be seen using the terminal.
To compile and test the program, go into the hello-world directory:

pradeep@localhost $] cd /home/user/contiki/examples/hello-world
pradeep@localhost $] make

This will compile the Hello World program for the native target, which causes the entire Contiki operating system and the Hello World application to be compiled into a single program that can be run by typing the following command (depicted in Figure 8):
Figure 5: Mote creation and compilation in Contiki

pradeep@localhost$] ./hello-world.native

This will print out the following text:

Contiki initiated, now starting process scheduling
Hello, world

Figure 6: Log output in motes
Figure 7: Simulation window of Contiki

The program will then appear to hang, and must be stopped by pressing Control + C.
The hello-world.c source is as follows:

#include "contiki.h"
#include <stdio.h> /* For printf() */
/*---------------------------------------------------------------------------*/
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);
/*---------------------------------------------------------------------------*/
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();

  printf("Hello, world\n");

  PROCESS_END();
}

Developing new modules
Contiki comes with numerous pre-built modules like IPv6, IPv6 UDP, hello world, sensor nets, EEPROM, IRC, Ping, Ping-IPv6, etc. These modules can run with all the sensors irrespective of their make. Also, there are modules that run only on specific sensors. For example, the energy module of a Sky mote can be used only on Sky motes, and gives errors if run with other motes like the Z1 or MicaZ.
Developers can build new modules for various sensor motes that can be used with different sensor BSPs using conventional C programming, and these can then be deployed in the corresponding sensors.
The Internet of Things is an emerging technology that leads to concepts like smart cities, smart homes, etc. Implementing the IoT is a real challenge, but the Contiki OS can be of great help here. It can be very useful for deploying applications like automatic lighting systems in buildings, smart refrigerators, wearable computing systems, domestic power management for homes and offices, etc.

References
[1] http://www.contiki-os.org/

By: T S Pradeep Kumar
The author is a professor at VIT University, Chennai. He has two websites: http://www.nsnam.com and http://www.pradeepkumar.org. He can be contacted at [email protected].
This article introduces the reader to Nix, a reliable, multi-user, multi-version, portable,
reproducible and purely functional package manager. Software enthusiasts will find it a
powerful package manager for Linux and UNIX systems.
Linux is versatile and full of choices. Every other day you wake up to hear about a new distro. Most of these are based on a more famous distro and use its package manager. There are many package managers, like Zypper and Yum for Red Hat-based systems; Aptitude and apt-get for Debian-based systems; and others like Pacman and Emerge. No matter how many package managers you have, you may still run into dependency hell, or you may not be able to install multiple versions of the same package, especially for tinkering and testing. If you frequently mess up your system, you should try out Nix, which is more than just another package manager.
Nix is a purely functional package manager. According to its site, "Nix is a powerful package manager for Linux and other UNIX systems that makes package management reliable and reproducible. It provides atomic upgrades and roll-backs, side-by-side installation of multiple versions of a package, multi-user package management and easy set-up of build environments." Here are some reasons for which the site recommends you ought to try Nix.
Reliable: Nix's purely functional approach ensures that installing or upgrading one package cannot break other packages.
Reproducible: Nix builds packages in isolation from each other. This ensures that they are reproducible and do not have undeclared dependencies. So if a package works on one machine, it will also work on another.
It's great for developers: Nix makes it simple to set up and share build environments for your projects, regardless of what programming languages and tools you're using.
Multi-user, multi-version: Nix supports multi-user package management. Multiple users can share a common Nix store securely, without the need to have root privileges to install software, and can install and use different versions of a package.
Source/binary model: Conceptually, Nix builds packages from source, but can transparently use binaries from a binary cache, if available.
Portable: Nix runs on Linux, Mac OS X, FreeBSD and other systems. Nixpkgs, the Nix packages collection, contains thousands of packages, many pre-compiled.

Installation
Installation is pretty straightforward for Linux and Macs; everything is handled magically for you by a script, but there are some pre-requisites like sudo, curl and bash, so make sure you have them installed before moving on. Type the following command at a terminal:

bash <(curl https://nixos.org/nix/install)

It will ask for sudo access to create a directory named Nix. You may see something similar to what's shown in Figure 1.
There are binary packages available for Nix, but since we are looking for a new package manager, using another package manager to install it is bad form (though you can, if you want to). If you are running a distro with no binary packages, or are running Darwin or OpenBSD, you have the option of installing it from source. To set the environment variables right, use the following command:

. ~/.nix-profile/etc/profile.d/nix.sh

Usage
Now that we have Nix installed, let's use it for further testing. To see a list of installable packages, run the following:

nix-env -qa

This will list the installable packages. To search for a specific package, pipe the output of the previous command to grep, with the name of the target package as the argument. Let's search for Ruby, with the following command:

nix-env -qa | grep ruby

It informs us that there are three versions of Ruby available.

nix-env -i ruby-2.0.0-p353
nixpkgs.ruby2
ruby-2.0.0-p353

By: Anil Kumar Pugalia
The author is a C++ lover and a Rubyist. His areas of interest include robotics, programming and Web development. He can be reached at [email protected].
In higher mathematics, transforms play an important role. A transform is mathematical logic to transform or convert a mathematical expression into another mathematical expression, typically from one domain to another. Laplace and Fourier are two very common examples, transforming from the time domain to the frequency domain. In general, such transforms have their corresponding inverse transforms. And this combination of direct and inverse transforms is very powerful in solving many real life engineering problems. The focus of this article is Laplace and its inverse transform, along with some problem-solving insights.

The Laplace transform
Mathematically, the Laplace transform F(s) of a function f(t) is defined as follows:

F(s) = ∫₀^∞ f(t) e^(-st) dt

where t represents time and s represents complex angular frequency.
To demonstrate it, let's take a simple example of f(t) = 1. Substituting and integrating, we get F(s) = 1/s. Maxima has the function laplace() to do the same. In fact, with that, we can choose to let our variables t and s be anything else as well. But, as per our mathematical notations, preserving them as t and s would be the most appropriate. Let's start with some basic Laplace transforms. (Note that string() has been used to just flatten the expression.)

(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?

z; /* Our input, making it non-solvable */
(%o7) laplace(t^n,t,s)
(%i8) string(laplace(1/t, t, s)); /* Non-solvable */
(%o8) laplace(1/t,t,s)
(%i9) string(laplace(1/t^2, t, s)); /* Non-solvable */
(%o9) laplace(1/t^2,t,s)
(%i10) quit();
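The F(s) = 1/s result for f(t) = 1 can also be checked by hand; for Re(s) > 0:

```latex
F(s) = \int_0^\infty 1 \cdot e^{-st}\,dt
     = \left[ -\frac{e^{-st}}{s} \right]_0^\infty
     = 0 - \left( -\frac{1}{s} \right)
     = \frac{1}{s}
```

which agrees with Maxima's (%o1) above.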
Your Shell with Zsh and Oh-My-Zsh

Discover the Z shell, a powerful scripting language, which is designed for interactive use.
Z shell (zsh) is a powerful interactive login shell and command interpreter for shell scripting. A big improvement over older shells, it has a lot of new features, and the support of the Oh-My-Zsh framework that makes using the terminal fun.
Released in 1990, the zsh shell is fairly new compared to its older counterpart, the bash shell. Although more than two decades have passed since its release, it is still very popular among programmers and developers who use the command-line interface on a daily basis.
setopt EXTENDEDGLOB # Enables extended globbing in zsh.
ls *(.) # Displays all regular files.
ls -d ^*.c # Displays all directories and files that are not .c files.
ls -d ^*.* # Displays directories and files that have no extension.
ls -d ^file # Displays everything in the directory except the file called 'file'.
ls -d *.^c # Displays files with extensions, except .c files.

Figure 2: Tab completion for files

...commands. Most other shells have aliases, but zsh supports global aliases. These are aliases that are substituted anywhere in the line. Global aliases can be used to abbreviate frequently-typed usernames, hostnames, etc. Here are some examples of aliases:

alias -g mr='rm'
alias -g TL='| tail -10'
alias -g NUL='> /dev/null 2>&1'
Figure 5: Setting aliases in the ~/.zshrc file

To install it via wget, type:

wget --no-check-certificate http://install.ohmyz.sh -O - | sh

To customise zsh, create a new zsh configuration, i.e., a ~/.zshrc file, by copying any of the existing templates provided:

cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc

Restart your zsh terminal to view the changes.

Plugins
To check out the numerous plugins offered in Oh-My-Zsh, you can go to the plugins directory in ~/.oh-my-zsh. To enable these plugins, add them to the ~/.zshrc file and then source it.

cd ~/.oh-my-zsh
vim ~/.zshrc
source ~/.zshrc

If you want to install some plugin that is not present in the plugins directory, you can clone the plugin from GitHub, or install it using wget or curl, and then source the plugin.

Themes
To view the themes in zsh, go to the themes/ directory. To change your theme, set ZSH_THEME in ~/.zshrc to the theme you desire and then source Oh-My-Zsh. If you do not want any theme enabled, set ZSH_THEME=''. If you can't decide on a theme, you can set ZSH_THEME='random'. This will change the theme every time you open a shell, and you can decide upon the one that you find most suitable for your needs.
To make your own theme, copy any one of the existing themes from the themes/ directory to a new file with a .zsh-theme extension and make your changes to that.
A customised theme is shown in Figure 6. Here, the user name, represented by %n, has been set to the colour green, and the computer name, represented by %m, has been set to the colour cyan. This is followed by the path, represented by %d. The prompt variable then looks like this:

PROMPT='$fg[green]%n $fg[red]at $fg[cyan]%m--->$fg[yellow]%d:'

The prompt can be changed to incorporate spacing, Git states, battery charge, etc, by declaring functions that do the same.
For example, here, instead of printing the entire path including /home/darshana, we can define a function such that if PWD contains $HOME, it replaces the same with ~:

function get_pwd() {
  echo ${PWD/$HOME/~}
}

To view the status of the current Git repository, the following code can be used:

function git_prompt_info() {
Panasonic is all set to launch 15 smartphones and eight feature phones in India this year. While the company will keep its focus on the smartphone segment, it has no plans of losing its feature phone lovers, as Panasonic believes that there is still scope for the latter in the Indian market. That said, Panasonic will invest more energy in grabbing what it hopes will be a 5 per cent share in the Indian smartphone market. And that will happen with the help of Android. Speaking

Coming soon: An exclusive Panasonic app store
Well, if you thought the unique user experience was the end of the show, hold on. There's more coming. The company plans to leave no stone unturned when it comes to making its Android experience complete for the Indian region. Rana reveals, "We are planning to come up with a Panasonic exclusive app store, which should come into existence in the next 3-4 months."
Your own notepad
Here is a simple and fast method to create a notepad-like application that works in your Web browser. All you need is a browser that supports HTML5 and the commands mentioned below.
Open your HTML5-supported Web browser and paste the following code in the address bar:

data:text/html, <html contenteditable>

Then use the following code:

data:text/html, <style>html,body{margin: 0; padding: 0;}</style><textarea style="font-size: 1.5em; line-height: 1.5em; background: %23000; color: %233a3; width: 100%;

...and the output will be something like what's shown below:

Filename    Type       Size     Used  Priority
/dev/sda5   partition  2110460  0     -1

Here, the swap is a partition and not a file.
Sharad Chhetri, [email protected]

This command will show the CPU utilisation, memory utilisation and IO utilisation of the process, along with the PID. Example:
"We are looking to hire people with core Android experience"
Before Xiaomi was to enter the
Indian market, many assumed
that this was just another Chinese
smartphone coming their way. But
perceptions changed after the brand
entered the sub-continent. Flipkart
got bombarded with orders and
Xiaomi eventually could not meet
the Indian demand. There are quite
a few reasons for this explosive
demand, but one of the most
important factors is the unique user
experience that the device offers. It
runs on Android, but on a different
version, one that originates from the
brain of Hugo Barra, vice president,
Xiaomi. When he was at Google,
he was pretty much instrumental
in making the Android OS what
it is. Currently, he is focused on
offering a different taste of it
at a unique price point. He has
launched MIUI, an Android-based
OS that he wants to be ported to
devices other than Xiaomi. For this,
he needs a lot of help from the
developers community. Diksha P
Gupta from Open Source For You
caught up with him to discuss his
plans for India and how he wants
to contribute to and leverage
the developers ecosystem in the
country. Read on...
Q What are the top features of Xiaomi MIUI that you think are lacking in other devices?
First of all, we have a dramatically simplified UI for the average person; so it feels simpler than anything else in the market right now.
Second, it is very customisable and is really appealing to our customers. We have thousands of themes that can completely change the experience, not just the wallpaper or the lock screen. From very detailed to very minimalistic designs, from cartoonist styles to cultural statements and on to other things, there is a huge list of options to choose from.
Third, I would say that there's customisation for power users as well. You can change a lot of things in the system. You can control permissions on apps, and you can decide which apps are allowed to run in the background. There is a lot you can do to fine-tune the performance of your device if you choose to. For example, you can decide which apps are allowed to access the 3G network. So I can say that out of the 45 apps that I have running on my phone, the only ones that are allowed to use 3G are WhatsApp, Hike, my email and my browser. I don't want any of the other apps that are running on this phone to be allowed to access 3G at all, which I won't know about and which may use bandwidth that I am paying for. It is a very simple menu. Like checkboxes, it lets you choose the apps that you want to allow 3G access to. So if you care about how much bandwidth you are consuming and presently control that by turning 3G on and off (which people do all the time), now you can allow 3G access only to messaging apps like WhatsApp or Hike that use almost no bandwidth at all. Those are the apps that you're all the time connected to, because if someone sends you a message, you want to get it as soon as possible.
Fourth, we have added a number of features to the core apps that make them more interesting. These include in-call features that allow users to take notes during a phone call, the ability to record a phone call and a bunch of other things. So it is not the dialler app alone, but also dozens of features all around the OS, like turning on the flashlight from the lock screen, having a private messaging inbox and a whole lot of other features.
Fifth, on your text inbox, you can pin a person to the top, if there is really someone who matters to you and you always want to have their messages on the top. You can decide at what time an SMS needs to be sent out. You can compose a message saying, "I want this message to go out at 7 p.m.," because maybe you're going to be asleep, for example, but still want that message to go out.
Then there are little things like, if you fire up the camera app and point the camera towards a QR code, it just automatically recognises it. You don't have to download a special app just to recognise QR codes. If you are connected to a Wi-Fi network with your Mi phone, you may want to share this Wi-Fi network with someone else. So, you just go into the Wi-Fi wing and say, "Share this connection," and the phone will then share a QR code. The person you want to share your Wi-Fi connection with can just scan this QR code and immediately get connected to the network without having to enter a password. So lots and lots of little things like that add up to a pretty delightful experience.

Q After MIUI from Xiaomi, Panasonic has launched its own UI and Xolo has launched HIVE. So do you think the war has now shifted to the UI level?
I think it would be an injustice to say that our operating system MIUI is just another UI like Android, because it is so much more than that. We have had a team of 500 engineers working on our operating systems for the last four years, so it is not just a re-skinning of Android. It is a much, much more significant effort. I can spend five hours with you just explaining the features of MIUI. I don't think there are many companies out there that have as significant a software effort that has been on for as long a time as we have. So while I haven't looked closely at these operating systems that you are talking about, my instinct is that they are not as profoundly different and well founded as MIUI.

Q What are your plans to reach out to the developers?
From a development perspective, first and foremost, we are very Android compliant. All of our builds, before OTA, go to Google for approval, like every other OEM out there. We are focused on Android APIs. We are not building new APIs. We believe that doing so would create fragmentation. It's kind of not ideal and goes against the ecosystem. So, from our point of view, we see developers as our early adopters. They are the people who we think are best equipped to try our products. We see developers as the first people that we take our devices to, to try out. That's primarily how we view the developer community.
There are some interesting twists out there as well. For instance, we are the first company to make a Tegra K1 tablet. So, already, MiPad is being used by game developers as the reference development platform for K1 game tabs. This is one of the few ways in which we get in touch with the developers and work with them.

Q How do you want to involve the Indian developer community, considering the fact that it is one of the largest in the world?
First of all, we are looking to hire developers here. We are looking to build a software engineering team in India, and in Bengaluru, to be precise, where we are headquartered. So that is the first and the most important step for us. The second angle is MIUI. It's not an open source operating system, but it is an open operating system that is based on Android. A lot of the actual code is closed, but it's open