Editors
Rajesh Vanchipura
Associate Professor, Department of Mechanical Engineering,
Government Engineering College Trichur, Thrissur, Kerala, India
K.S. Jiji
Associate Professor, Department of Electrical Engineering,
Government Engineering College Trichur, Thrissur, Kerala, India
All rights reserved. No part of this publication or the information contained herein may
be reproduced, stored in a retrieval system, or transmitted in any form or by any means,
electronic, mechanical, by photocopying, recording or otherwise, without the prior written
permission of the publisher.
Although all care has been taken to ensure the integrity and quality of this publication and the
information herein, no responsibility is assumed by the publishers or the authors for any
damage to property or persons as a result of the operation or use of this publication and/or
the information contained herein.
Table of contents
Preface xv
Plenary sessions and keynote speeches xvii
Conference secretariat and committees xix
Editors xxiii
Generic object detection in image using SIFT, GIST and SURF descriptors 707
D.P. Joy, K.S. Shanthini & K.V. Priyaja
Preface
The 5th Biennial International Conference on ‘Emerging Trends in Engineering Science and
Technology’, ICETEST 2018, was held from 18 to 20 January 2018 at Government
Engineering College, Trichur, Kerala, India. ICETEST is an international interdisciplinary
conference covering research and developments in different domains of Engineering, Architecture
and Management. The distinctive feature of ICETEST 2018 is that it is a culmination
of seven sub-conferences: CEASIDE for Civil Engineering, PDME for Mechanical Engineering,
PICC for Electrical Engineering, ICAChE for Chemical Engineering, E-SPACE for
Electronics and Communication Engineering, ICETICS for Computer Science Engineering
and ICCC for the School of Architecture. Despite comprising seven different sub-conferences,
there was a specific common underlying theme – Society, Energy & Environment. Of
late, concern over energy and the environment has reached its peak, and researchers in academia
need to support industry and society through socially and environmentally sustainable
solutions.
ICETEST 2018 was organized at Government Engineering College Thrissur (GECT) to
discuss and disseminate the major advances in technologies for Society, Energy and Environ-
ment. Government Engineering College Trichur (GECT), established in 1958, is one of the
premier institutions for quality technical education in the state of Kerala, India. GECT offers
undergraduate and postgraduate programmes in seven Engineering disciplines and Architecture.
Further, GECT is a major research centre for four branches of Engineering under the
University of Calicut, and is an approved centre for research under the Quality Improvement
Programme of the Ministry of Human Resource Development, Govt. of India. The fifth edition
of the biennial international conference, ICETEST 2018, was organized as part of the Diamond
Jubilee celebrations of the institute.
More than 480 pre-registered authors submitted their manuscripts to the conference under
the seven sub-conferences. ICETEST 2018 accepted 258 papers after a double-blind peer
review process. Finally, 251 authors presented their work at the conference. There were delegates
and contributions from countries and regions including the USA, Canada, Korea, Malaysia, the UAE and Africa.
Cultural and entertainment programs, along with a gala dinner, were arranged for the delegates
on the evening of the second day. A cultural show by the world-famous Indian institute of Art and
Culture, the ‘Kerala Kalamandalam’, was the highlight, providing a glimpse of Kerala’s
rich cultural heritage. These programs in fact made the technical event richer.
The primary objective of ICETEST 2018 was to bring together emerging technologies from
various disciplines of Engineering, Science & Technology under the theme Society, Energy
and Environment, for the betterment of society and the preservation of nature. The conference
provided a platform for researchers from different domains to showcase their work and interact.
There were 51 paper presentation sessions on different themes under the seven sub-conferences,
conducted in 13 venues within the institution, with 323 delegates and 748 authors.
For international delegates, it provided an opportunity to visit Kerala, a place commonly
known as ‘God’s own country’, apart from enjoying the technical fiesta.
There were three plenary sessions and an Editorial and Publishing Workshop at the main
venue.
1. ‘Power Electronics for Renewable Energy and Power Systems: Opportunities and Chal-
lenges’ by Dr. Jian Sun, Professor, Department of Electrical, Computer, and Systems
Engineering (ECSE), Rensselaer Polytechnic Institute, USA.
2. ‘Biomass for Fuels and Chemicals’ by Dr. Suchithra T.G., University of Nottingham,
Malaysia.
3. ‘Power electronics application in alternate energy systems’, by Prof. Ashoka K.S. Bhat,
Professor, Department of Electrical Engineering, University of Victoria, B.C., Canada.
4. ‘Editorial Workshop’ by Dr. Gagandeep Singh, Editorial Manager, CRC Press,
Taylor & Francis Group.
Apart from the plenary sessions, there were 19 keynote speeches organized by the seven constituent
sub-conferences, with experts from India and abroad. The wide range of topics covered in the
keynote sessions highlighted developments and challenges from different domains with reference
to the conference theme.
We thank all the distinguished delegates who participated in ICETEST 2018 especially:
• Padma Bhushan Dr. K. Radhakrishnan, Honorary Distinguished Advisor, Department of
Space/ISRO, Govt of India for inaugurating the conference and for consenting to be the
chief guest of the inaugural function.
• Padmasree M. Chandradathan, Scientific Advisor to Chief Minister, Govt. of Kerala, for
consenting to be the chief guest of the valedictory function.
• The renowned publication houses CRC Press, Taylor & Francis, and IEEE for their communication
and sponsorship.
• The funding agencies TEQIP and AICTE, Govt. of India, and DTE and KSCSTE, Govt. of Kerala,
for the financial support which made this event possible.
• Dr. K.P. Indiradevi, Director of Technical Education for the state of Kerala, for the invalu-
able support and directions.
• Dr. B. Jayanand, the Principal, GECT, and the Diamond Jubilee conveners, Dr. N. Sajikumar,
Head of Civil Engineering, Dr. C.P. Sunilkumar, Professor and Dean UG Studies, Dr. M.
Nandakumar, Head of Electrical Engineering, Dr. Thajudin Ahamed V.I., Head of Electronics
and Communication Engineering, and Dr. V.P. Mohandas, Professor in Mechanical
Engineering, for their valuable guidance and support.
• The members of organising committee Dr. E.A. Subaida, Dept. of Civil Engg. (CEA-
SIDE), Dr. Sudheesh R.S., Dept. of Mechanical Engg. (PDME), Dr. Jaison Mathew, Dept.
of Electrical Engg. (PICC), Dr. Subin Poulose, Dept. of Chemical Engg., (ICAChE), Prof.
Mohanan K.P., Dept. of Electronics & Communication Engg. (E-SPACE), Dr. Swaraj
K.P., Dept. of Computer Science & Engg. (ICETICS) and Dr. Ranjini Bhattathiripad
T., School of Architecture (ICCC) for their help, support, commitment and participation
before, during and after the conference.
• All the conveners and members of various committees, reviewers, volunteers, faculty,
staff and students of GECT for their efforts in making the event, ICETEST 2018, a reality.
CHIEF PATRONS
PATRONS
ORGANISING SECRETARY
JOINT SECRETARY
TREASURER
ORGANIZING COMMITTEE
Editors
ABSTRACT: Cement is the main constituent in the production of concrete. The
production of Ordinary Portland Cement (OPC) leads to huge emissions of greenhouse gases
such as CO2. Fly ash based geopolymer binders are an innovative alternative to OPC which
can provide high strength. Much research has been conducted on the properties of fly ash
based geopolymer concrete hardened by heat curing, a requirement that is considered a limitation
for cast in-situ applications of geopolymers. The aim of this study is to produce geopolymers
cured without elevated heat. For this, Ground Granulated Blast Furnace Slag (GGBFS) is
also used as a binder along with fly ash in different percentages. The results obtained revealed
that the addition of GGBFS to fly ash based geopolymer concrete enhanced its mechanical
properties under an ambient curing condition.
Keywords: geopolymer concrete, fly ash, blast furnace slag, ambient curing
1 INTRODUCTION
Concrete is a widely used construction material all over the world. To reduce the environmental
impact caused by the production of cement, it is necessary to develop some environmentally
friendly alternative materials. For this, a material called geopolymer, synthesized by the alkali
activation of aluminosilicate compounds, can be used. Geopolymer is a sustainable and
economical binder, as it is produced from industrial by-products
such as fly ash and can replace 100% of the cement in concrete. Compared with Portland
cement, geopolymers have very low CO2 emissions (Partha Sarathi et al. 2014).
Low calcium fly ash (class F) is used as a suitable material for geopolymer because of its
wide availability and lower water demand. From previous investigations it is clear that heat
cured low calcium fly ash based geopolymer concrete (GPC) has shown excellent mechani-
cal and durability properties (Karthik A. et al. 2017). The only limitation of fly ash based
GPC is that it requires heat curing, so it cannot be used for cast in-situ applications at low
ambient temperatures.
To widen the use of GPC beyond precast applications, it is necessary to produce GPC members
cured in ambient conditions. For this, Ground Granulated Blast Furnace Slag (GGBFS)
can be used as a binder along with fly ash (Pradip Nath et al. 2014). GGBFS is a by-product
of the steel industry with a high calcium content compared to fly ash. Blast furnace slag
is formed during the production of hot metal in blast furnaces. If the molten slag is cooled
and solidified by rapid quenching, GGBFS is formed. Only a few research papers are available
regarding slag based geopolymer concrete, and they do not provide any clear evaluation of
the data. Thus, to establish the relevance of GGBFS in GPC, it is necessary to thoroughly study
its mechanical properties.
2 EXPERIMENTAL PROGRAM
The experimental program consists of developing slag based GPC by replacing fly ash
with GGBFS in various percentages: 10%, 20%, 30% and 40%.
2.4 Testing
The workability of the fresh geopolymer concrete was determined by a slump test. Seven days
after casting, a compressive strength test, splitting tensile strength test and flexural strength
test were carried out using a compression testing machine with a capacity of 3000 kN.
Mix   Flexural strength   % variation
M     4.3                 0
M1    4.8                 11.63
M2    5.4                 25.58
M3    7.8                 81.4
M4    7.2                 67.44
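The percentage variation column follows directly from the control mix M; for M1, for example: (4.8 − 4.3)/4.3 × 100 ≈ 11.63%.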
Figure 1. Percentage increase in strengths.
4 CONCLUSIONS
REFERENCES
Aradhana Mehta & Kuldeep Kumar (2016). Strength and durability characteristics of fly ash and slag
based geopolymer concrete, International Journal of Civil Engineering & Technology (IJCIET)
Research, 305–314.
Ganesan, N., Ruby Abraham & Deepa Raj S. (2015). Durability characteristics of steel fibre reinforced
geopolymer concrete, Construction and Building Materials, 93, 471–476.
Karthik A., Sudalaimani K. & Vijaya Kumar C.T. (2017). Investigation on mechanical properties of fly
ash-ground granulated blast furnace slag based self curing bio-geopolymer concrete, Construction
and Building Materials, 149, 338–349.
Partha Sarathi, Pradip Nath, Prabir Kumar Sarker. (2014). The effect of ground granulated blast-furnace
slag blending with fly ash and activator content on the workability and strength of geopolymer
concrete cured at ambient temperature, Materials and Design, 62, 32–39.
Phoo-Ngernkham, T., Chindaprasirt, P., Sata, V. & Sinsiri, T. (2013). High calcium fly ash geopolymer
containing diatomite as additive, Indian Journal of Engineering Material Science, 20(4), 310–318.
Pouhet Raphaelle & Cyr Martin. (2016). Formulation and performance of fly ash metakaolin geopolymer
concretes, Construction and Building Materials, 120, 150–160.
Pradip Nath, Prabir Kumar Sarkar. (2014). Effect of GGBFS on setting, workability, and early strength
properties of fly ash geopolymer concrete cured in ambient condition, Construction and Building
Materials, 66, 163–171.
Rashad, A.M. (2013). A comprehensive overview about the influence of different additives on the prop-
erties of alkali activated slag: A guide for civil engineer, Construction and Building Materials, 47,
29–55.
1 INTRODUCTION
The modulus of elasticity relates the strain in a material to the stress applied under loading, and
hence governs the deflection produced by the load. It is a very important parameter of a material
for predicting its structural behavior, and a key mechanical property of concrete. As the modulus of
elasticity increases, the material becomes stiffer and the deflection experienced by the structure
decreases. The modulus of elasticity of concrete can be calculated from a compressive
strength test: stresses and strains are obtained from the strength test and plotted, and the elastic
modulus is then measured as the slope of the line drawn to 40% of the stress value at the ultimate
load. The value of the modulus of elasticity is needed to perform simulations of
material behavior. The modulus of elasticity and compressive strength are related,
and both increase with the progress of the hydration of cement in the case of
concrete. Hence, it is important to monitor this parameter.
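In symbols, the modulus described above is a secant (chord) value,

E = (0.4 σu) / ε0.4

where σu is the stress at the ultimate load and ε0.4 is the strain measured at a stress of 0.4 σu. (Test standards such as ASTM C469 take the chord from a small initial strain rather than from the origin; the form above simply restates the procedure described in this paper.)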
Concrete is the most widely used construction material all over the world. Concrete struc-
tures are often subject to cracks, which lead to durability problems. Cracks
can develop at any stage of the service life of concrete due to shrinkage, loading, weathering
and so on. The cracks not only permit the entry of corrosive fluids but also propagate further
and damage the structure. Micro-cracks in concrete can allow the entry of water
and other impurities like chloride and sulfate ions into the concrete, which lead to degra-
dation of the concrete matrix and corrosion of embedded reinforcement. This affects the
structural integrity of concrete. The expenses incurred in maintenance and repair of concrete
structures subject to cracks are very high. Many techniques are being used to arrest and
repair these cracks and enhance the durability of such structures. One such technique is bio-engineered
concrete, which is intrinsic in nature. It can be developed by adding a special
type of calcite-precipitating bacteria to concrete. The bio-mineralization capacity of the bacteria
is utilized in this technique to fill the cracks. Through feasibility studies of different
aspects, various researchers have concluded that self-healing behavior can be achieved by
impregnating calcite-precipitating bacteria into concrete (Jonkers & Schlangen, 2009).
It is important to analyze the relationship between the modulus of elasticity and compressive
strength of bacterial concrete and compare it with that of conventional concrete.
2 LITERATURE REVIEW
3.1 Materials
The materials used in this study include the following:
3.1.1 Cement
Ordinary Portland Cement (53 grade), tested for various properties as per
IS:4031-1988 and found to conform to the specifications of IS:12269-2013, was used.
3.1.4 Water
Locally available potable water conforming to the standards specified in IS:456-2000 was used for
mixing and curing.
3.1.5 Microorganism
Bacillus subtilis (B. subtilis), a laboratory cultured bacterium collected from Kerala
Agricultural University, Mannuthy, was used. The bacterial suspension, with a concentration
of 10⁸ cells/ml, was collected, and an optimum concentration of 10⁵ cells/ml was
obtained from the sample.
3.3 Specimens
Test specimens consisted of standard cube specimens of side 150 mm and standard cylinder
specimens of dimensions 150 mm diameter and 300 mm height.
3.5 Relationship between Young’s modulus and characteristic compressive strength
of bacterial concrete
The tests to find the elastic modulus and compressive strength of concrete were conducted
in a compression testing machine. To determine the elastic modulus, a compressometer was
attached to the test cylinder. A zero reading was confirmed before subjecting the cylinder to
uniaxial compression. Deflection values on the cylinder were noted at every 20 kN load
interval. A stress–strain graph was plotted from the load and deflection values. Then, the
elastic modulus was determined as the slope of the line drawn to 40% of the stress value at
the ultimate load. The values of the elastic modulus corresponding to the characteristic compressive
strength were compared by plotting a graph of the same.
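As an illustration of this procedure, the following minimal Python sketch computes the secant modulus from cylinder test readings. The 150 mm compressometer gauge length and the load–deflection values are assumptions for illustration only, not data from this study.

```python
import numpy as np

def secant_modulus(loads_kN, deflections_mm, diameter_mm=150.0, gauge_mm=150.0):
    """Secant modulus of elasticity (N/mm^2) from cylinder test data:
    slope of the line from the origin to 40% of the ultimate stress."""
    area = np.pi * diameter_mm ** 2 / 4.0              # cross-sectional area, mm^2
    stress = np.asarray(loads_kN, float) * 1e3 / area  # N/mm^2
    strain = np.asarray(deflections_mm, float) / gauge_mm
    target = 0.4 * stress.max()                        # 40% of ultimate stress
    ascend = slice(0, int(stress.argmax()) + 1)        # ascending branch only
    eps_40 = np.interp(target, stress[ascend], strain[ascend])
    return target / eps_40

# Illustrative readings taken at every 20 kN load increment (hypothetical data)
loads = [0, 20, 40, 60, 80, 100, 120, 140]                    # kN
defl  = [0, 0.004, 0.008, 0.013, 0.018, 0.024, 0.031, 0.040]  # mm
print(f"E = {secant_modulus(loads, defl):.0f} N/mm^2")
```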
4 RESULTS
where E is the modulus of elasticity of concrete and
fck is the characteristic compressive strength of concrete.
Figure 4. Relationship between Young’s modulus and characteristic compressive strength of bacterial
concrete and governing equation.
5 VALIDATION
The validation of the equation obtained from the study was conducted against the experimental
investigations of Ganesh & Siddiraju (2016). In their paper, for M40 grade bacterial
concrete, they experimentally obtained a modulus of elasticity of
34235 N/mm². From the governing equation of the relationship between the modulus of
elasticity and characteristic compressive strength of bacterial concrete (Equation 1),
the modulus of elasticity obtained for M40 grade concrete is 35506.4 N/mm²,
a variation of only 3.8% from the value observed in that study. Hence,
it could be inferred that the governing equation of the relationship between the modulus of
elasticity and characteristic compressive strength of bacterial concrete holds good.
This experimental study shows that the impregnation of B. subtilis into concrete improves
the compressive strength and modulus of elasticity of concrete. On carrying
out compressive strength tests on M15, M20 and M25 grades of concrete, an average increase
in compressive strength of 15.8% was observed, which is a substantial enhancement of the
strength parameter. Investigation of the elastic properties of the M15, M20 and M25 grades of
concrete showed a sizable enhancement of the modulus of elasticity of 5.6% on average.
The governing equation of the relationship between Young’s modulus and characteristic
compressive strength of bacterial concrete appears reliable on validation against the reference
literature, with a variation of 3.8%, which indicates that the governing equation holds
good. Overall, the results show a significant improvement in the properties of concrete and
prove bacterial concrete to be a feasible solution, while shedding light on its future scope for
use in construction.
REFERENCES
Ganesh B.N., Siddiraju S., (2016). An experimental study on strength and fracture properties of self
healing concrete. International Journal of Civil Engineering and Technology (IJCIET), 7(3), 398–406.
Gavimath C.C., Mali B.M., Hooli V.R., Patil A.B., (2011). Potential application of bacteria to improve
the strength of cement concrete. International Journal of Advanced Biotechnology and Research, 3(1),
541–544.
Jonkers H.M., Arjan Thijssen, Gerad Muyzer, Oguzhan Copuroglu, Erik Schlangen, (2010). Appli-
cation of bacteria as self healing agent for the development of sustainable concrete, Ecological
Engineering, 36(2), 230–235.
Jonkers H.M., Erik Schlangen, (2009). A two component bacteria-based self-healing concrete. Concrete
Repair, Rehabilitation and Retrofitting, 2(1), 215–220.
Kim V.T., Elke Gruyert, Hubert Rahier, Nele D.B., (2012). Influence of mix composition on the extent
of autogenous crack healing by continued hydration or calcium carbonate formation. Construction &
Building Materials, 37(1), 349–359.
Kim V.T., Nele D.B., Willem D.M., (2010). Use of bacteria to repair cracks in concrete. Cement and
Concrete Research, 40(1), 157–166.
Klaas V.B., (2012). Self-healing material concepts as solution for aging infrastructure. 37th Conference
on Our World in Concrete & Structures, Singapore, 8(1), 29–31.
Krystian J., Stefania G., (2015). The influence of concrete composition on Young’s modulus. Procedia
Engineering, 108(1), 584–591.
Meera C.M., Dr. Subha V., (2016). Strength and durability assessment of bacteria based self-healing
concrete. IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), Proceedings of the
International Conference on Emerging Trends in Engineering and Management, 1(1), 1–7.
Sedat K., Leyla T., Halit Y.E., (2013). Young’s modulus of fiber-reinforced and polymer-modified
lightweight concrete composite, Construction & Building Materials, 22(6), 1019–1028.
Song G., Ma N. & Li H.N. (2006). Applications of shape memory alloys in civil structures. Engineering
Structures, 28(19), 1266–1274.
Sonja T., Gan S.N. & Noor H.A.K., (2011). Optimization of microencapsulation process for self-healing
polymeric material. Sains Malaysiana, 40(7), 795–802.
Trask R.S., Williams G.J. & Bond, I.P. (2007), Bioinspired self-healing of advanced composite structures
using hollow glass fibres. Journal of the Royal Society Interface, 4(1), 363–371.
Virginie Wiktor, & Henk M. Jonkers, (2011). Quantification of crack healing in novel bacteria based self
healing concrete. Cement and Concrete Composites, 33(7), 763–770.
Wang J.Y., Tittelboom K.V., Belie N.D., Verstraete W., (2010). Potential of applying bacteria to heal
cracks in concrete. Proceedings of Second International Conference on Sustainable Construction
Materials and Technologies, Universita Politechnica delle Marche, Ancona, Italy, 06(1), 28–30.
1 INTRODUCTION
The construction industry is one of the major contributors to the fast growing Indian economy.
Concrete is the material most utilized by the construction industry due to its excellent
properties and versatility (Sivakumar & Sivaramakrishnan 2016). The per capita consumption
of concrete in India is more than 200 kg. This huge demand for concrete, a homogeneous
product made of heterogeneous materials such as cement, sand and aggregates, creates
heavy exploitation of natural resources, especially aggregates (Patnaik et al. 2014). The
consumption of natural aggregates beyond sustainable limits will damage the environment.
With the conservation of natural resources and the preservation of the environment being the
essence of any development, researchers are in constant pursuit of alternative materials that
can replace these natural materials, especially aggregates. Even though artificial aggregates
are currently in use, their manufacture creates many ecological issues. Another problem arising
from continuous technological and industrial development is the disposal of waste material. Hence,
the last decade has witnessed much research and many innovative solutions on the utilization of
waste products in concrete. One such development is the utilization of industrial wastes
and demolition wastes as aggregates, which provides an alternative to natural and artificial
aggregates (Al-Jabri et al. 2009).
Copper is one of the most widely used metals next to steel and aluminium, and its annual
production is about ten lakh tonnes (Tiwari & Saxena 2016). Copper slag is a by-product
obtained during the smelting and refining of copper. This waste product is managed by recovery
of metal, production of value-added products, recycling and disposal by dumping
(Khanzadi & Behnood 2009). Copper slag shows high contents of
aluminium, silica and iron oxides in its chemical composition, similar to that of pozzolanic materials. Additionally,
its hardness and gradation indicate its suitability as an alternative material for fine aggregate
(Ambily et al. 2015). Hence, copper slag can be used as a replacement material for fine
aggregate in the production of concrete. Its use in the construction field minimises the pollution
caused by other methods of slag disposal. Being a waste material, its cost is
minimal, making it a cost-effective alternative to natural aggregates (Tamil Selvi et al. 2014).
The use of copper slag in concrete thus provides ecological as well as economic benefits.
2 EXPERIMENTAL PROGRAMME
2.1 Materials
The materials used for the investigation were cement, fine aggregate, coarse aggregate of
12.5 mm nominal size, superplasticizer and water. The constituent materials were tested as per
the methods prescribed by the relevant IS codes.
2.1.1 Cement
Standard tests were conducted as per the relevant IS codes on the 53 grade OPC used for the
investigation. The 28-day compressive strength obtained was 54 MPa. Other properties obtained
were: standard consistency = 34%, initial setting time = 130 minutes, final setting time = 540 minutes.
The results conform to the values specified in IS 12269:1987.
Table 1. Physical properties of copper slag.

Colour             Black, glassy
Bulk density       3.25
pH                 6.95
Specific gravity   3.5
Moisture content   <0.01%

Chemical composition of copper slag (% weight):

Silica             26–30%
Free silica        Less than 0.5%
Alumina            2%
Iron oxide         42–47%
Calcium oxide      1–2%
Magnesium oxide    1.04%
Copper oxides      6.1% max
Sulphates          0.13%
Chlorides          Not detected
2.1.4 Superplasticiser
The superplasticiser used for the production of the concrete mix was Cera Hyperplast XR-WR40,
a new generation polycarboxylate-based water reducing admixture with a specific
gravity of 1.09.
3.1 Workability
The slump values of the M60 mix with different percentage replacements of fines with copper
slag are shown in Fig. 3. An increase in workability was observed with increasing
percentage replacement of conventional fines with copper slag. This may be due to the
low water absorption of copper slag. The smooth and glassy texture of copper slag may
have reduced the friction between material particles, resulting
in a ball-bearing effect which increases the fluidity of the mix. In addition, as the surface
area of copper slag is lower, a reduced amount of paste may be sufficient for coating the
surface of the copper slag, which increases the paste available for the floating of the aggregate
particles.
G = W / Aeff

where
G = fracture energy,
W = total energy dissipated in the test,
Aeff = effective cross-sectional area of the specimen.
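As a sketch of how these quantities can be evaluated, the following Python fragment estimates W as the area under a load–CMOD curve by the trapezoidal rule and divides by the effective area; the readings and the 100 × 80 mm ligament are illustrative assumptions, not data from this study.

```python
import numpy as np

def fracture_energy(load_N, cmod_mm, a_eff_mm2):
    """Fracture energy G = W / Aeff, with W taken as the area under the
    load-CMOD curve (trapezoidal rule); result in N/mm."""
    p = np.asarray(load_N, float)
    d = np.asarray(cmod_mm, float)
    W = float(np.sum((p[1:] + p[:-1]) * np.diff(d) / 2.0))  # dissipated energy, N*mm
    return W / a_eff_mm2

# Illustrative softening curve from a notched three-point bending test
load = [0, 800, 1500, 1200, 600, 100]     # N
cmod = [0, 0.02, 0.05, 0.12, 0.25, 0.40]  # mm
print(f"G = {fracture_energy(load, cmod, a_eff_mm2=100 * 80):.4f} N/mm")
```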
Fracture toughness is an indication of the amount of stress required to propagate a pre-existing
flaw. It describes the ability of a material with a crack to resist fracture. Mode I fracture parameters
were determined by a three-point bending test. Load versus crack mouth opening displacement
curves for various percentages of copper slag replacing fine aggregate are shown in Figs. 6 to 9.
The specimens with 20%, 40% and 60% replacement showed 95.4%, 138% and 28% improvements
in fracture energy compared to the control specimens. The fracture toughness also exhibited an
improvement of about 40% (Table 5). This indicates that the energy required for crack initiation
and propagation is higher for HSC mixes with copper slag aggregates. The crack tip
opening load showed an improvement of 100% and the ultimate load showed an increase
of 67% for a replacement level of 40%. This improvement in fracture parameters can be
mainly attributed to the strength of aggregates and the bond between aggregates and
cement paste.
4 CONCLUSION
1. From the analysis of the parameters studied, it can be seen that the properties showed considerable
improvement for replacement levels of 20% to 40%.
2. Workability increased as the percentage of copper slag replacing fine aggregate increased.
3. The increases in compressive strength, split tensile strength and flexural strength of
specimens with 40% replacement of fines with copper slag are 5.88%, 30.77% and 50%
respectively.
REFERENCES
Ambily P.S., Umarani C., Ravisankar K., Prabhat Ranjan Prem, Bharathkumar B.H. and Nagesh R.
Iyer. 2015. Studies on ultra-high performance concrete incorporating copper slag as fine aggregate,
Construction and building materials, vol. 77, pp. 233–240.
Binaya Patnaik, Seshadri Sekhar, T. and Srinivasa Rao. 2014. An Experimental Investigation on
Optimum Usage of Copper Slag as Fine Aggregate in Copper Slag Admixed Concrete, International
Journal of Current Engineering and Technology, vol. 04, pp. 3646–3648.
Khalifa S. Al-Jabri, Makoto Hisada, Salem K. Al-Oraimi and Abdulla Al-Sidi H. 2009. Copper
slag as sand replacement for high performance concrete, Cement and concrete composites, vol. 31,
pp. 483–488.
Mostafa Khanzadi and Ali Behnood. 2009. Mechanical properties of high-strength concrete incorporating
copper slag as coarse aggregate, Construction and Building Materials, vol. 23, pp. 2183–2188.
Paresh Tiwari and Anil Kumar Saxena. 2016. A review on partial substitution of copper slag with sand in
concrete materials, International Journal of Engineering Research and Technology, vol. 05, pp. 212–215.
Sivakumar T. and Sivaramakrishnan R. 2016. Experimental Investigation on the Mechanical Proper-
ties of Reinforced Cement Concrete using Copper Slag as Partial Replacement for Fine Aggregates,
International Journal of Current Engineering and Technology, vol. 06, pp. 834–840.
Tamil Selvi P., Lakshmi Narayani P. and Ramya G. 2014. Experimental Study on Concrete Using
Copper Slag as Replacement Material of Fine Aggregate, Civil and environmental engineering,
vol. 4, pp. 1–6.
Wei Wu, Weide Zhang and Guowei Ma. 2010. Optimum content of copper slag as fine aggregate in high
strength concrete, Materials and design, 31, pp. 2878–2883.
ABSTRACT: One of the most prominent techniques available for repair and strengthening
of concrete structures is the use of a Fiber Reinforced Polymer (FRP) system. This study
identifies the best method of installing CFRP bars in reinforced concrete beams when the same
effective area of CFRP is used at the critical section of the beam. The results of experimental
investigations on RC beams strengthened in flexure using various strengthening methods,
namely the Near Surface Mounted (NSM) and Externally Bonded Reinforcement (EBR)
methods, are discussed.
1 INTRODUCTION
Most existing structures are designed based on gravity load design (GLD); consequently,
they lack many important considerations such as seismic loading and ductile detailing. In recent
years, strengthening technologies for reinforced concrete structures using FRP composites
have been gaining widespread interest and growing acceptance in the civil engineering
industry. CFRP can be installed mainly by two different methods: Near Surface Mounted
(NSM) and Externally Bonded Reinforcement (EBR) on the element, to enhance the flexural
strength. Externally bonded FRP (EBR) can be described as a method of bonding
a sheet or a plate with an epoxy on the outer surface of a structure. Near surface mounted
(NSM) bars have emerged as a new strengthening technique in which the external reinforcement
is embedded into the section in grooves with an epoxy adhesive [4][1][2].
Even though the NSM method has been identified as an effective method of installing CFRP
in qualitative terms, experimental values have not yet been provided to verify the effectiveness
of the NSM method in terms of flexural strengthening when the same effective area of CFRP is
used as in the EBR technique.
2 FLEXURAL STRENGTHENING
3 EXPERIMENTAL PROGRAMME
In this study, the test program consisted of two series of tests, which included the casting,
instrumentation and testing of nine RC beams with rectangular cross-sections. The beams
were designed as under-reinforced beams to initiate failure in flexure. The 28-day average
compressive strength of the concrete was 36.2 MPa. The RC beams had a length of 1000 mm,
a cross-section of 150 mm width and 200 mm depth, and an effective span of 700 mm. Each series
comprised a control beam (CB) and three beams for each investigated strengthening
technique. In this paper, an attempt is made to evaluate the flexural performance of RC beams
using the NSM and EBR techniques with the same effective area of CFRP at the mid-span section.
The properties of the CFRP bar, CFRP sheets and epoxy are presented in Table 1.
(Table of beam test results: Pcr, first crack load (kN); Pu, ultimate load (kN); δu, deflection at
ultimate load (mm); ∆Pu, % ultimate load increase over control beam; and mode of failure.)
12 mm, cracking of the polyester resin did not occur. Due to this, the increase in load carrying
capacity of beam BD12G20 is 14.7% when compared with BD10G20.
From Fig. 6, it is clear that at higher loads the deflection of the strengthened beam is reduced
compared to the control beam, although the final deflection at ultimate load is higher in the
strengthened beam. This is in agreement with the smaller number of cracks and finer cracks
shown in the failure pattern in Fig. 5. At ultimate load, the deflection of the beam increased by
6.7% and 14% in the case of BD10G20 and BD12G20 respectively when compared with the control beam.
5 CONCLUSIONS
The major conclusions derived from the experimental study are as follows:
The influence of the cross-sectional area of NSM CFRP in flexural strengthening was studied,
and it was found that the beams strengthened with 10 mm and 12 mm diameter bars showed
an increase in load carrying capacity of 19.51% and 34.15% respectively compared to the
unstrengthened control beam.
Beams strengthened with CFRP NSM reinforcement along with U-wrapping as end
anchorage can increase the efficiency of the strengthening technique by up to 6%
compared to specimens without U-wrapping.
The ultimate load of the EBR specimen bonded with CFRP sheets increased by 18%
compared to the control beam. With the addition of one more layer of CFRP on the tension
side of the beam, the load carrying capacity increased only marginally because of
the debonding failure observed.
The flexural capacity of the NSM strengthened beam increased by
as much as 15% compared to the EBR strengthened beam with a similar effective cross-sectional
area of CFRP.
REFERENCES
Ahmed Ehsan et al. 2011. Flexural performance of CFRP strengthened RC beams with different degrees
of strengthening schemes. International Journal of the Physical Sciences 6(9): 2229–2238.
Balamuralikrishnan R. & Antony C. 2009. Flexural Behavior of RC Beams Strengthened with Carbon
Fiber Reinforced Polymer (CFRP) Fabrics. The Open Civil Engineering Journal 3: 102–109.
Barros J.A.O. and Dias S.J.E. 2006. Near surface mounted CFRP laminates for shear strengthening of
concrete beams. Cement & Concrete Composites 28: 276–292.
Bilotta A. et al. 2011. Bond Efficiency of EBR and NSM FRP Systems for Strengthening Concrete
Members. Journal of Composites for Construction 15: 757–772.
Eliane K. et al. 2007. Flexural Strengthening of RC-T Beams with Near Surface Mounted (NSM) FRP
reinforcements. University of Patras, Patras, Greece 16–18.
George Varughese Abilash & Manikandan T. 2015. Flexural Retrofitting of RC Beams using Extra
Rebars and U-Wraps. International Journal of Science and Research 5: 2305–2309.
Khalifa M. Ahmed. 2016. Flexural performance of RC beams strengthened with near surface mounted
CFRP strips. Elsevier 55: 1497–1505.
Laraba Abdelkrim et al. 2014. Structural Performance of RC Beams Strengthened with NSM-CFRP.
Proceedings of the World Congress on Engineering 2.
Moshiur Rahman, Md. et al. 2015. Effect of adhesive replacement with cement mortar on NSM
strengthened RC Beam. University of Malaya 61–72.
Shukri A.A. et al. 2016. Behaviour of precracked RC beams strengthened using the side-NSM tech-
nique. Elsevier 123: 617–626.
Singh S.B. et al. 2014. Experimental and parametric investigation of response of NSM CFRP-
strengthened RC Beams. ASCE J. Compos. Constr. 18: 1–11.
A. Azam
Department of Applied Physics, Aligarh Muslim University, Aligarh, India
1 INTRODUCTION
Steel reinforced concrete is considered an ideal composite material and thus has been
used extensively for the construction of all types of structures. This is mainly because of the
high compressive strength and excellent tensile strength contributed by concrete and steel
respectively. However, its structural and durability performance is affected when it is subjected
to aggressive environments (Shetty, 2013). The corrosion of embedded steel, with subsequent
spalling and delamination of the concrete cover, is one of the most common causes of deterioration,
particularly when admixtures containing chloride or unwashed sea sand are used, or the
structure is exposed to seawater or de-icing salt. Hence, the choice of materials for the concrete
mix, type of steel and appropriate reinforcement detailing are crucial parameters in con-
structing durable steel reinforced concrete structures in the marine environment (Neville &
Brooks, 1994).
The main chemical ingredients of seawater are chloride, sodium, magnesium, calcium and
potassium ions. Seawater contains about 35,000 ppm of dissolved salts, of which sodium chloride
is the main component (McCoy, 1996). Seawater is alkaline in nature with a
pH value ranging between 7.5 and 8.4. The corrosion process of embedded steel in concrete
arises when the pH value becomes lower than 11. Hence, it is necessary to supply alkalinity
into steel reinforced concrete structures, particularly when the structure is exposed to an
extremely severe environment (Gani, 1997). Chloride ions, present in seawater, can enter the
concrete and accelerate the corrosion process of steel reinforcements. This is the most damag-
ing effect of seawater on steel reinforced concrete structures (Aburawi & Swamy, 2008). The
sodium and potassium ions present in seawater may increase the alkali aggregate reaction.
Magnesium and sulfate ions may deteriorate the cement paste (Uddin et al., 2004).
Various investigations have revealed the effects of saline water on the mechanical strengths
of cementitious composites and it was observed that saline water is not fit for concrete
2 EXPERIMENTAL INVESTIGATION
Tests were conducted at the age of 90 days in order to determine the corrosion rate of embedded steel
in concrete for all the mixes. All the specimens were cast using Ordinary Portland Cement
(OPC). The mix ratio was taken as 1:2:4. Coarse aggregates with a maximum nominal size
of 12.5 mm were used for all mixes. Locally available river sand was used as fine aggregate.
Nano-SiO2, with an average particle size of 15 nm, was used as an admixture. The water/
cement ratio was taken as 0.5. The cement content used was 300 kg/m³. Potable water with
a pH value of about 7.12 and saline water (sodium chloride 3.5% by weight of water) with
a pH value of about 7.31 were used for mixing and curing. All the concrete specimens were
demolded 24 hours after casting and kept in the specified curing water up to the testing
day. Moreover, three specimens for each condition were prepared in order to understand the
variability of the test results. The quantities of the materials used for all the concrete mixes, such
as Fine Aggregate (FA), Coarse Aggregate (CA) and cement, are shown in Table 1.
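As a quick consistency check on these figures, the mixing water content implied by the stated ratio is w = (w/c) × cement content = 0.5 × 300 = 150 kg/m³, i.e. about 150 l/m³.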
Figure 3. Effect of nano-SiO2 on compressive strength of SS mixes.
Figure 4. Effect of nano-SiO2 on flexural strength of PP mixes.
Figure 5. Effect of nano-SiO2 on flexural strength of PS mixes.
Figure 6. Effect of nano-SiO2 on flexural strength of SS mixes.
At 90 days, higher mechanical strengths were observed for the PP mixes compared to the SS
and PS mixes. Thus, the rate of strength gain of the SS and PS mixes was faster than that of the
PP mixes at the early ages. This showed that saline water accelerates the hydration of cement.
However, a reduction in the strengths of the SS and PS mixes at the later age was reported, which
may be due to the formation of salt crystals in the concrete, affecting the strength.
(Table of corrosion test results for each specimen: Ecorr in mV, Icorr in mA/cm² × 10⁻⁵ and
corrosion rate CR in mm/year × 10⁻³.)
The nano-SiO2 fills the tiny voids between the hydration products and improves the bonding
capacity, leading to a compact structure with improved density and reduced porosity. Thus,
the ingress of chloride ions is significantly reduced.
4 CONCLUSIONS
The following conclusions may be drawn on the basis of the present investigations:
1. Concrete mixes prepared as well as cured in saline water have higher mechanical strengths
than concrete mixes prepared and cured in potable water at the early ages. However, the
later age strengths of concrete mixes mixed and cured in potable water were found to be
significantly higher than the concrete mixes mixed and cured in saline water. This reduc-
tion in strength is possibly due to the formation of salt crystals in concrete.
ACKNOWLEDGMENTS
M. Daniyal is thankful to the Council of Scientific and Industrial Research (CSIR), Human
Resource Development Group, Government of India, for providing financial assistance in
the form of a Senior Research Fellowship (SRF).
REFERENCES
Aburawi, M. & Swamy, R.N. (2008). Influence of salt weathering on the properties of concrete. The
Arabia Journal for Science and Engineering, 33 (N 1B), 105–115.
Akinkurolere, O.O., Jiang, C. & Shobola, O.M. (2007). The influence of salt water on the compressive
strength of concrete. Journal of Engineering Applied Science, 2(2), 412–415.
Binici, H., Aksogan, O., Gorur, E.B., Kaplan, H. & Bodur, M.N. (2008). Performance of ground blast
furnace and ground basaltic pumice concrete against seawater attack. Construction and Building
Materials, 22(7), 1515–1526.
Gaitero, J.J., Campillo, I. & Guerrero, A. (2008). Reduction of the calcium leaching rate of cement paste
by addition of silica nanoparticles. Cement and Concrete Research, 38, 1112–1118.
Gani, M.S.J. (1997). Cement and concrete (1st ed.). London, UK: Chapman & Hall.
Givi, A.N., Rashid, S.A., Aziz, F.N.A. & Salleh, M.A.M. (2010). Experimental investigation of the size
effects of SiO2 nano-particles on the mechanical properties of binary blended concrete. Composites:
B, 41, 673–677.
IS: 516-1959, Indian Standard for methods of tests for strength of concrete, Reaffirmed 1999.
IS:5816-1999, Indian Standard for splitting tensile strength of concrete-method of test, Reaffirmed 2004.
Jo, B.W., Kim, C.H., Tae, G.H. & Park, J.B. (2007). Characteristics of cement mortar with nano-SiO2
particles. Construction and Building Materials, 21, 1351–1355.
Kawashima, S., Hou, P., Corr, D.J. & Shah, S.P. (2013). Modification of cement-based materials with
nanoparticles. Cement and Concrete Composites, 36, 8–15.
Li, G. (2004). Properties of high-volume fly ash concrete incorporating nano-SiO2. Cement and Concrete
Research, 34, 1043–1049.
Ltifi, M., Guefrech, A., Mounanga, P. & Khelidj, A. (2011). Experimental study of the effect of addition
of nanosilica on the behaviour of cement mortars. Procedia Engineering, 10, 900–905.
McCoy, W.J. (1996). Mixing and curing water for concrete. Significance of tests and properties of
concrete and concrete making materials, STP 169-A. Philadelphia, PA: American Society for Testing
and Materials, 515–521.
ABSTRACT: In India, most structures are designed for gravity loading as per IS 456:2000.
These structures are susceptible to damage during earthquakes. During a severe earthquake,
a structure is likely to undergo inelastic deformation and has to depend on its ductility and
energy absorption capacity to avoid collapse. Buildings designed for gravity loading therefore
need to be strengthened to increase their strength, stiffness and ductility. Recently developed
Fiber-Reinforced Polymer (FRP) techniques can play a vital role in structural repairs, seismic
strengthening and retrofitting of existing buildings, whether damaged or undamaged.
Jacketing is the most popularly used method for the strengthening of building structural
elements. This paper presents a review of FRP jacketing on different structural elements of
a building.
1 INTRODUCTION
2.2 Carbon-fiber
Carbon-fibers are created when Polyacrylonitrile (PAN) fibres, pitch resins or rayon are
carbonized (through oxidation and thermal pyrolysis) at high temperatures. Through the further
processes of graphitizing or stretching, the strength or elasticity of the fibers can be enhanced
respectively. Carbon-fibers are manufactured with diameters analogous to those of glass-fibers,
ranging from 4 to 17 µm.
2.3 Aramid-fiber
Aramid-fibres are most commonly known as Kevlar, Nomex and Technora. Aramids are
generally prepared by the reaction between an amine group and a carboxylic acid halide
group (aramid), and commonly this occurs when an aromatic polyamide is spun from a
liquid concentration of sulfuric acid into a crystallized fiber. Fibers are then spun into larger
threads in order to weave into large ropes or woven fabrics.
3 LITERATURE REVIEW
Arduini and Nanni (1997) studied pre-cracked RC beam specimens strengthened with Carbon
Fiber-Reinforced Polymer (CFRP), analyzed for different parameters including
two CFRP material systems, two concrete surface preparations, two RC cross-sections, and
the number and location of CFRP plies. It was observed that the effect of CFRP strengthening
was considerable, but the effect of some of the tested variables was modest. An analytical
model they created could simulate the load-deflection behavior as well as the failure mode of
the pre-cracked RC specimens. Different failure mechanisms, from ductile to brittle, were
simulated and verified.
Norris et al. (1997) experimentally and analytically studied the behavior of damaged or
understrength concrete beams retrofitted with thin CFRP sheets, epoxy-bonded to enhance
their flexural and shear strengths. The effect of the CFRP sheets on the strength and stiffness of the
beams for various orientations of the fibers, with respect to the axis of the beam, was also
considered. It was observed that the magnitude of the increase in strength and stiffness, and
mode of failure were related to the direction of reinforcing fibers.
GangaRao and Vijay (1995) experimentally investigated RC beams strengthened with car-
bon fiber wraps to evaluate the enhancement in flexural strength. This was compared to
identical concrete beams strengthened with bonded steel plates. Parameters like strength,
stiffness, compositeness between wrap and concrete, and associated failure modes, were also
evaluated.
Papakonstantinou et al. (2001) experimentally studied the effects of GFRP composite
rehabilitation systems on the fatigue performance of RC beams. The results indicated that
the fatigue life of RC beams with the given geometry and subjected to the same cyclic load
could be significantly increased through the use of externally bonded GFRP composite
sheets. It was found that the beams failed primarily due to fatigue of the steel reinforcement.
Debonding of the GFRP composite sheet was a secondary mechanism in the strengthened
beams.
Kumutha et al. (2007) experimentally investigated the behavior of axially loaded rectangular
columns that had been strengthened with a GFRP wrap. Specimens with different aspect
ratios wrapped with zero, one and two layers of GFRP were investigated. The test results
showed that a linear relationship existed between the strength of the confined concrete and the
lateral confining pressure provided by the FRP, as formalized below.
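For reference, such a linear confinement relationship is commonly written in the Richart-type form

f′cc = f′co + k1 · fl

where f′cc and f′co are the confined and unconfined concrete strengths, fl is the lateral confining pressure supplied by the FRP jacket, and k1 is an empirical effectiveness coefficient. This is the generic form of the model, not necessarily the exact coefficients reported in that study.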
4 CONCLUSIONS
FRP is used most commonly in the repair and reconstruction of damaged or deteriorating
structures. Seismic protection and changes in codal provisions has established the necessity
for strengthening thereby enhancing the performance of structural elements of a building.
Speed of construction, lightness of weight, resistance to corrosion and high strength has
proved FRP to be an effective material for jacketing. But most works are carried out on
columns with small sizes, plain concrete. More investigations are to be carried out on large
reinforced sections. The impact of seismic behavior on damaged structural elements that have
been jacketed partially or fully, are also to be investigated.
Al-Salloum Y.A. (2007). Influence of edge sharpness on the strength of square concrete columns con-
fined with FRP composite laminates. Composites, 38, (Pt B), 640–650.
Alsayed S.H., Almusallam T.H., Ibrahim S.M., Al-Hazmi N.M., Al-Salloum Y.A. & Abbas H. (2013).
Experimental and numerical investigation for compression response of CFRP strengthened shape
modified wall-like RC column. Construction and Building Materials, 63, 72–80.
Arduini, M. & Nanni, A. (1997). Behavior of pre-cracked RC beams strengthened with carbon FRP
sheets. Journal of Composites for Construction, 1(2), 63–70.
Banjara, N.K. & Ramanjaneyulu, K. (2017). Experimental and numerical investigations on the
performance evaluation of shear deficient and GFRP strengthened reinforced concrete beams.
Construction and Building Materials, 137, 520–534.
GangaRao, H.V.S. & Vijay, P.V. (1995). Bending behavior of concrete beams wrapped with carbon
fabric. Journal of Structural Engineering, 124(1), 3–10.
Hay, A.S.A. (2014). Partial strengthening of R.C square columns using CFRP. Housing and Building
National Research Center Journal.
Kumutha R., Vaidyanathan R. & Palanichamy M.S. (2007). Behaviour of reinforced concrete rectangular
columns strengthened using GFRP. Cement & Concrete Composites, 29, 609–615.
Norris, T., Saadatmanesh, H. & Ehsani, M.R. (1997). Shear and flexural strengthening of RC beams
with carbon fiber sheets. Journal of Structural Engineering. 903–911.
Papakonstantinou, C.G., Petrou, M.F. & Harries, K.A. (2001). Fatigue behavior of RC beams
strengthened with GFRP sheets. Journal of Composites for Construction, 5(4), 246–253.
Parikh, K. & Modhera, C.D. (2012). Application of GFRP on preloaded retrofitted beam for
enhancement in flexural strength. International Journal of Civil and Structural Engineering, 2(4),
1070–1080.
Ranolia, K.V., Thakkarb, B.K. & Rathodb, J.D. (2012). Effect of different patterns and cracking in FRP
wrapping on compressive strength of confined concrete. Procedia Engineering, 51, 169–175.
Raval, R.P. & Dave, U.V. (2013). Behaviour of GFRP wrapped RC columns of different shapes. Proce-
dia Engineering, 51, 240–249.
Sharma, S.S., Dave, U.V. & Solanki, H. (2013). FRP wrapping for RC columns with varying corner
radii. Procedia Engineering, 51, 220–229.
Silva, M.A.G. (2011). Behaviour of square and circular columns strengthened with aramidic or carbon
fibers. Construction and Building Materials, 25, 3222–3228.
Singh, V., Bansal, P.P., Kumar, M. & Kaushik, S.K. (2014). Experimental studies on strength and
ductility of CFRP jacketed reinforced concrete beam-column joints. Construction and Building Mate-
rials, 55, 194–201.
Tahsiri, H., Sedehi, H.T.O., Khaloo, A. & Raisi, E.M. (2015). Experimental study of RC jacketed and
CFRP strengthened RC beams. Construction and Building Materials, 95, 476–485.
Wang, L.M. & Wu, Y.F. (2008). Effect of corner radius on the performance of CFRP confined square
concrete columns test. Engineering Structures, 30, 493–505.
Wang, Y.C. & Hsu, K. (2008). Design of FRP-wrapped reinforced concrete columns for enhancing axial
load carrying capacity. Composite Structures, 82, 132–139.
Widiarsa, I.B.R. & Hadi, M.N.S. (2013). Performance of CFRP wrapped square reinforced concrete
columns subjected to eccentric loading. Procedia Engineering, 54, 365–376.
Wu, Y.F. & Wei, Y.Y. (2010). Effect of cross-sectional aspect ratio on the strength of CFRP confined
rectangular concrete columns. Engineering Structures, 32, 32–45.
Yalcin, C., Kaya, O. & Sinangil, M. (2008). Seismic retrofitting of RC columns having plain rebars using
CFRP sheets for improved strength and ductility. Construction and Building Materials, 22, 295–307.
Yaqub, M. & Bailey, C.G. (2011). Repair of fire damaged circular reinforced concrete columns with
FRP composite. Construction and Building Materials, 25, 359–370.
Liya Jerard
Government Engineering College, Thrissur, Kerala, India
S. Arun
Department of Civil Engineering, Government Engineering College, Thrissur, Kerala, India
1 INTRODUCTION
Dynamic soil properties govern the behavior of soil subjected to dynamic loads. Dynamic
soil parameters can be classified into low strain properties (dynamic Young’s modulus and
shear modulus, Poisson’s ratio, soil damping etc.) and properties mobilized at larger strains.
Estimation of dynamic soil parameters finds major applications in the estimation of response
of machine foundations, dynamic bearing capacity of soil, and soil-structure interaction
effects during propagation of stress waves generated during earthquakes.
For the estimation of in-situ dynamic soil properties, various tests such as vertical block
resonance test, cyclic plate load test, and wave propagation tests are currently used (IS 5249,
1992). Such methods are laborious and require expensive instrumentation.
To simplify the dynamic soil property estimation procedure, researchers in the past have
tried an inverse problem solution approach. Ikemoto et al. (2000) used ground
motions measured during earthquakes for the estimation of dynamic soil parameters. The methodology
used a combination of the S-wave multiple reflection theory and an optimal search tool
based on genetic algorithms. Glaser and Baise (2000) used a time series analysis technique,
the Auto Regressive Moving Average (ARMA) model, for the identification of soil properties
from ground motion histories. Garsia and Romo (2004) used a neural network approach for
identifying dynamic soil properties from earthquake records.
A time domain approach, involving the use of an extended Kalman filter for identification
of soil stiffness and damping, was proposed by Mikami and Sawada (2004). The methodology
was successfully demonstrated over a soil-structure system modeled using three degrees of
freedom.
This paper presents a novel attempt for in-situ identification of dynamic soil stiffness and
damping parameters. The methodology involves use of measured dynamic responses from
the foundation, numerical models for simulating the dynamic response of a soil-foundation
system and an optimal search tool for the identification of soil parameters. The methodology
is demonstrated through numerical studies on the model of a 16 T impact hammer at the
forging facilities of Steel Industries Forging Limited (SIFL) in Athani, Thrissur.
The main objective of the work was to explore the potential of identifying the dynamic soil
properties from the measured time history of dynamic responses of the foundation. For this,
the numerical model of a coupled soil-foundation system was utilized. The parameters of the
numerical model were treated as unknown variables and the optimal value of the same were
searched for the condition that the error between the measured foundation response history
and simulated response history from the numerical model was a minimum.
The numerical evaluation of dynamic response of the soil-foundation system and optimal
search algorithms formed the major computational part of the methodology. For this study,
Newmark’s time stepping method (Chopra, 2015) was employed for the numerical integra-
tion of the coupled equations of motion of the soil-foundation model. Newmark’s average
acceleration scheme, which has unconditional numerical stability, was utilized.
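A minimal Python sketch of the incremental form of this scheme (γ = 1/2, β = 1/4) for a linear multi-degree-of-freedom system is given below; the matrices and force history are to be supplied by the caller, and the formulation follows the standard textbook incremental equations rather than the authors' own MATLAB code.

```python
import numpy as np

def newmark_avg(M, C, K, F, dt, u0, v0):
    """Newmark average-acceleration (gamma=1/2, beta=1/4) time stepping
    for M u'' + C u' + K u = F(t); F has one row per time step."""
    gamma, beta = 0.5, 0.25
    n_steps, ndof = F.shape
    u = np.zeros((n_steps, ndof)); v = np.zeros_like(u); a = np.zeros_like(u)
    u[0], v[0] = u0, v0
    a[0] = np.linalg.solve(M, F[0] - C @ v0 - K @ u0)
    # effective stiffness is constant for a linear system
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for i in range(n_steps - 1):
        # incremental effective load over the step
        dF = (F[i + 1] - F[i]
              + M @ (v[i] / (beta * dt) + a[i] / (2 * beta))
              + C @ (gamma / beta * v[i] + dt * (gamma / (2 * beta) - 1) * a[i]))
        du = np.linalg.solve(Keff, dF)
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1 - gamma / (2 * beta)) * a[i])
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a
```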
The soil dynamic parameters, namely the dynamic soil stiffness (which is a function of the dynamic
modulus of elasticity) and the soil damping ratio, were kept as the unknown parameters to be
identified. The optimal values of these parameters were searched from a set representing the
practical range of these parameters.
Conventional methods of soil parameter identification involve the use of gradient-based
search algorithms, but such methods suffer from the problem of converging to a local minimum.
Hence, for the present study, genetic algorithms were used for the optimal search.
Genetic algorithms are computerized search and optimization algorithms based on the
mechanics of natural genetics and natural selection. They are good at taking large, potentially
huge, search spaces and navigating them looking for optimal combinations of things
and solutions (Rajasekaran and Vijayalekshmi, 2003).
Genetic algorithms proceed to the global optimum through the following major steps:
1. Generation of a random population. Solving a problem means searching for the best solution; the space in which all possible solutions lie is called the ‘search space’. For this study, the search space was generated by specifying lower and upper limits of the unknown variables (stiffness of soil or damping of soil), with a uniform distribution between the limits. A random population was initially generated from the defined search space.
2. Evaluation of fitness for each organism. As per Darwin's evolutionary theory of “survival of the fittest”, the best ones should survive and create new offspring. The fitness of each organism (i.e., each individual of the initial random population) needs to be evaluated; a fitness function derived from the objective function was used for this purpose.
Genetic algorithms are suited to maximization problems, so minimization problems need to be converted to maximization problems by a suitable transformation (Rajasekaran and Vijayalakshmi Pai, 2003). Generally, this transformation is achieved through a suitable inversion of the objective function; alternatively, an exponential mapping procedure has been found to be more effective (Gregory, 1991) and was utilized in this study. In the exponential mapping used for a minimization problem, β is the Selection Pressure, defined as the ratio of the probability that the most fit member of the population is selected as a parent to the probability that an average member is selected as a parent; its value was taken as 8. Objs is the Objective Function Value of the Organism (a sample from the selected random population) whose fitness is evaluated, and the Worst Objective Function Value (worstObj) is that of the organism with the maximum error in the initial random population.
3. Selection of models into the new population. Samples are selected from the initial random population to cross over and produce the best offspring, based on their fitness values. Selection is accomplished by one of three procedures: roulette wheel selection, tournament selection or random selection. The roulette wheel selection procedure was used in this study.
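To make the selection step concrete, here is a minimal sketch of roulette wheel (fitness-proportionate) selection; the population layout, parameter ranges and fitness values are placeholders, not values from the study.

```python
import numpy as np

def roulette_wheel_select(population, fitness, n_parents, rng):
    """Pick parents with probability proportional to fitness
    (fitness-proportionate selection)."""
    probs = fitness / fitness.sum()            # selection probabilities
    idx = rng.choice(len(population), size=n_parents, p=probs)
    return population[idx]

rng = np.random.default_rng(0)
# Illustrative population: each row is (soil stiffness kN/m, damping ratio)
population = rng.uniform([1e6, 0.02], [1e7, 0.20], size=(50, 2))
fitness = rng.random(50)                       # placeholder fitness values
parents = roulette_wheel_select(population, fitness, n_parents=25, rng=rng)
```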
Two error norms were considered for defining the objective function:
1. Mean square error of the data:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - x_i\right)^2 \qquad (2)$$

where X is the Measured Response, x is the Simulated Response from the numerical model and n is the Overall Number of Time Steps used in the Analysis or the Sampling Interval of the Measured Response.
2. The absolute difference of the L2 norms of the data.
The first norm is more stringent, as it estimates the error point-wise, whereas the second norm estimates the error associated with the data in a mean sense. The optimization procedure involved finding the global minimum of this error norm over the practical range of the soil parameters to be identified. The overall methodology used is presented in the form of a flowchart in Figure 1.
A MATLAB script based on the above flowchart was developed for performing numerical
studies.
The methodology was demonstrated for the case of the identification of dynamic soil parameters for a 16 T impact hammer foundation located in the forging facilities of Steel Industrials Forgings Limited (SIFL) in Athani, Thrissur. An impact hammer foundation is a structure used to receive and transfer both static and dynamic loads imposed during the operation of the machine (Figure 2).
The following details relate to the existing SIFL 16 T hammer foundation:
1. Anvil mass = 325 T.
2. Foundation mass = 1225.32 T (based on the foundation drawings).
3. Stiffness of neoprene pad between anvil mass and foundation = 9 × 10⁶ kN/m (based on the elastic modulus of the material used as specified in the drawing and the dimensions of the pad).
4. Stiffness of soil at foundation level = 3.93 × 10⁶ kN/m (based on the bearing capacity of the soil at the foundation level of 8 m).
The soil-foundation system was modeled as a two degree of freedom system and the equation of motion for the same was developed. Here M1 is the Mass of the Anvil, K1 the Stiffness of the Elastic Pad, M2 the Mass of the Foundation Block, K2 the Soil Stiffness, C1 the Damping Associated with the Elastic Pad, and C2 the Soil Damping.
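For a two degree of freedom anvil-foundation model with these parameters, the coupled equations of motion take the standard textbook form below, with x1 the displacement of the anvil and x2 that of the foundation block; this is a form consistent with the description, not an equation reproduced from the paper:

$$
\begin{aligned}
M_1\ddot{x}_1 + C_1(\dot{x}_1 - \dot{x}_2) + K_1(x_1 - x_2) &= 0\\
M_2\ddot{x}_2 + C_2\dot{x}_2 + K_2 x_2 - C_1(\dot{x}_1 - \dot{x}_2) - K_1(x_1 - x_2) &= 0
\end{aligned}
$$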
The right-hand side of the equation of motion was zero, as the problem was of the initial value type, with the initial velocity of impact computed from the energy transferred and the fall of the impact hammer (V = 6 m/s).
The soil stiffness, K2, can be related to the Coefficient of Uniform Compression (Cu) and the Area of the Foundation Block (Ab) through the relation (IS 5249: 1992):

$$K_2 = C_u A_b \qquad (5)$$
The soil modulus of elasticity and the coefficient of uniform compression hold the following relationship (IS 5249: 1992).
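A commonly used form of this relationship, following the elastic half-space solution of Barkan (1962) and given here as a sketch rather than as the exact expression from IS 5249, is:

$$C_u = \frac{1.13\,E}{1-\mu^2}\,\frac{1}{\sqrt{A_b}}$$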
where Cu is the coefficient of uniform compression in kN/m³, µ is the Poisson's Ratio of the Soil and E is the Soil Elastic Modulus.
Finally, the Soil Damping Ratio (ζ) can be related to Damping Coefficient (C2) through
the foundation mass and natural frequency of vertical vibrations.
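A standard single degree of freedom relation consistent with this statement (an assumption, not an equation quoted from the paper) is:

$$\zeta = \frac{C_2}{2\sqrt{K_2 M_2}} = \frac{C_2}{2 M_2 \omega_n}, \qquad \omega_n = \sqrt{\frac{K_2}{M_2}}$$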
Since the objective of the present study was to understand the efficacy of the proposed
identification scheme, observation records of foundation response were synthetically gen-
erated by numerically computing the response of the soil-foundation system using known
values of soil parameters to be identified (soil stiffness and damping).
Figure 3. Comparison of synthetic response and converged response from simulations (using identified parameters).
This study explored a strategy for the estimation of in-situ dynamic soil properties from dynamic response measurements on the foundation structure rather than direct measurements of the soil response. The methodology combined a numerical time integration procedure (for the solution of the coupled equations of motion involving the soil mass and foundation structure) and an optimization algorithm (for identification of the optimal values of the soil parameters) within a system identification framework. The efficacy of the methodology was demonstrated through case studies on an impact hammer foundation. The following conclusions were derived from the present study:
Dynamic soil parameters can be estimated by exploiting the potential soil-structure interaction using system identification techniques.
Newmark's method and the genetic algorithm prove to be excellent numerical tools in this venture.
The mean square error was found to be the better estimate for defining the objective function in the optimization scheme for the identification of soil parameters.
The proposed methodology is able to identify the dynamic soil parameters of soil stiffness (elastic modulus) and soil damping to within 2% error.
ACKNOWLEDGEMENTS
This work was carried out as part of the undergraduate project of Liya Jerard. The management of SIFL is thanked for giving access to their forging facilities in Athani, Thrissur. The contribution of the other batch mates (Panchami, Sameera, Sabareesh and Varghese) is also acknowledged.
REFERENCES
Barkan, D.D. (1962). Dynamics of bases and foundations. New York, NY: McGraw-Hill.
Bureau of Indian Standards. (1980). Code of practice for design and construction of machine foundations (IS 2974 Part II: 1980). New Delhi, India: Author.
Bureau of Indian Standards. (1992). Determination of dynamic properties of soil—Method of test (IS 5249: 1992). New Delhi, India: Author.
Chopra, A.K. (2015). Dynamics of structures (3rd ed.). New Delhi, India: Pearson.
Garsia, S.R. & Romo, M.P. (2004). Dynamic soil property identification using earthquake records:
A neural network approximation. In, Proceedings of the 13th World Conference on Earthquake
Engineering. Vancouver, Canada.
Glaser, S.D. & Baise, L.G. (2000). System identification estimation of soil properties at Lotung site. Soil
Dynamics and Earthquake Engineering, 19, 521–531.
Gregory, J.E.R. (1991). Foundations of genetic algorithms. San Mateo, California: Morgan Kaufmann
Publishers.
Ikemoto, T., Miyojima, M. & Kitaura, M. (2000). Inverse analysis of soil parameters using accelera-
tion records. In Proceedings of the 12th World Conference on Earthquake Engineering. Auckland,
New Zealand.
Mikami, A. & Sawada, T. (2004, August). Time-domain identification system of dynamic soil-structure
interaction, Proceedings of the 13th World Conference on Earthquake Engineering. Vancouver,
Canada.
Rajasekaran, S. & Vijayalakshmi Pai, G.A. (2003). Neural networks, fuzzy logic and genetic algorithms. New Delhi, India: PHI Learning Pvt. Limited.
ABSTRACT: Glass Fibre Reinforced Gypsum (GFRG) panels have been in use in India for about a decade. These panels, with the advantages of rapidity, sustainability and affordability, are well suited to meet the mass housing requirements of India. However, the present design of GFRG buildings requires the walls to start from the foundation itself, as the walls are load bearing. This precludes ground storey parking, a feature in high demand in multi-storey construction, especially in urban areas where land is scarce. The proposed solution to this issue is to raise the GFRG building system over a framed structure in the ground storey comprising Reinforced Concrete (RC) columns, beams and slabs. The present study evaluates the behaviour of a GFRG Open Ground Storey (OGS) structure subjected to gravity and lateral loads using pushover analysis.
1 INTRODUCTION
1.1 General
Glass Fibre Reinforced Gypsum (GFRG) panels have been used for the construction of GFRG buildings in India for more than a decade. They are suitable for rapid, affordable, mass-scale building construction. These are load bearing hollow panels manufactured by a special calcination process that converts raw gypsum, obtained as a waste product from the fertilizer industry, into calcined gypsum plaster, which is then reinforced with glass fibres along with special additives. The addition of 300–350 mm long glass fibres, randomly spread, contributes tensile strength to the brittle gypsum plaster, and the inclusion of additives offers moisture resistance and load-bearing capability. Figure 1 shows the typical cross section of the panel.
The GFRG panel, as a structural member, is capable of resisting axial load, lateral shear with in-plane bending, and out-of-plane bending. In Australia, where the panels were invented in 1990, they were used as load bearing walls in low-rise as well as multi-storey buildings (up to 9 storeys). Subsequently, the technology was adopted in various other countries such as China and India, based on research carried out in these countries. The studies in India, carried out at the Structural Engineering Research Centre (SERC), Chennai and the Indian Institute of Technology Madras (IITM), helped in gaining acceptability in the
country. Studies at IITM helped in extending the use of the panels to the construction of slabs and staircases, as well as shear walls capable of resisting seismic loads. Infilling concrete inside the cavities of the panel enhances the axial load bearing capacity by more than 9 times and, with the help of reinforcement, tremendously enhances the lateral load carrying capacity. Hence, for all major structural applications, these panels are used in combination with reinforced concrete. Construction using GFRG panels offers the advantages of affordability, faster construction, eco-friendliness, increased carpet area, reduction in the weight of the structure, etc. The GFRG buildings constructed so far have walls starting from the foundation itself.
The current demand from the housing sector for the urban community in India is the provision of parking space for vehicles at the ground or basement storey of the building, referred to as an open ground storey (OGS) building system.
An OGS in a conventional multi-storeyed framed structure demands either a dynamic analysis of the building or an enhancement factor (which varies across codes) for the forces (obtained from analysis) in the ground storey columns and beams, in order to avoid the formation of plastic hinges in the columns, which would result in sudden collapse of the structure (Fig. 2). Recently, a recommendation of minimum wall plan density was also introduced in IS 1893 Part 1:2016.
2 DESIGN PHILOSOPHY
The basic design philosophy of the proposed GFRG building system is to avoid collapse by a storey mechanism due to the formation of plastic hinges in the ground storey. To satisfy this, the columns and beams in the ground storey shall be designed in such a way that the yielding of frame members does not precede the attainment of the peak load by the GFRG walls in the upper storeys. The dissipation of energy is taken care of by the formation of vertical cracks in the GFRG.
3 MODELLING
The modelling was done in SAP2000. The GFRG wall was modelled as a series of nonlinear layered shell elements connected by link elements (representing the GFRG ribs between the concrete cores). The shell element comprised two layers of GFRG (top and bottom flanges), one layer of M20 concrete (the infill inside the cavities) and a reinforcement layer oriented at 90° (12 mm longitudinal reinforcement inside the cavities). Figure 3 shows the details of the SAP2000 model of a GFRG wall.
The modelling of the GFRG wall was validated against the available experimental results (Philip, 2017), as shown in Figure 4. The GFRG wall was connected to an RC plinth beam at the bottom by means of starter bars. The plinth beam was locked onto the strong floor of the laboratory to arrest any sliding or uplift of the beam. Hence, the wall was modelled with fixity at one end. The experimental specimen was subjected to quasi-static lateral load at the top level.
The performance of the GFRG-OGS building system was compared with that of a conventional RC-OGS framed building in terms of base shear and drift demand. The analysis was
Figure 5. Cross section details of ground storey frame and GFRG walls.
done for a representative 4-storey, 5-bay frame assuming a symmetric plan for the building. A bay width of 3 m and a storey height of 3 m were considered in this study. Figure 5 shows the details of the cross sections of the columns and beams in the ground storey and the cross section of the GFRG panel in the upper storeys.
The sections were chosen based on a study conducted on RC-OGS buildings (Davis, 2009). Hinge properties developed from the moment-rotation curves of these sections were also assigned to the frame elements. A nonlinear static displacement-controlled analysis was carried out for the chosen frame. A uniform lateral load distribution was chosen for the pushover analysis, according to the FEMA 356 recommendation for vertically irregular structures. Hence, a unit load was applied at the top level of each storey with a specified target displacement.
The arch action in the transfer of gravity load, as stated in the literature, was observed
in the analysis. Figure 6 shows the principal stress distribution in the walls under gravity
load.
It was observed that no hinges formed in the ground storey frame up to a drift level of 0.5%. For the RC-OGS frames in the literature, plastic hinges formed even at a drift ratio of 0.11%, though they were within the life safety performance level. This indicates that the demand on the columns and beams in the GFRG-OGS building system is much lower than in conventional OGS structures. Figure 7 shows the pushover curve for the GFRG-OGS specimen obtained from SAP2000.
Figure 8 shows the hinges formed in the ground storey frame at a higher drift ratio of 1.6%.
The behaviour of the GFRG-OGS building system is different from that of RC framed OGS building systems. The performance was characterized by the arch mechanism of gravity load transfer, which resulted in a lower force demand in the ground storey beams. It also exhibited better performance in terms of delayed formation of plastic hinges in the ground storey columns. The larger drift ratios obtained with the adopted dimensions also indicate the possibility of reduced member cross sections compared to RC framed OGS buildings.
ACKNOWLEDGEMENTS
The authors would like to thank Indo-French Centre for the Promotion of Advanced
Research (IFCPAR), India, for their continued support throughout this research work.
REFERENCES
Arlekar J.N., Jain S.K. and Murty C.V.R. 1997, Seismic response of RC frame building with soft storeys,
Proceedings of the CBRI golden jubilee conference on natural hazards in urban habitat, November,
New Delhi: 3–24.
ASCE 7 2005, Minimum design loads for buildings and other structures, American Society of Civil
Engineers, USA.
Burhouse, P. 1969. Composite action between brick panel walls and their supporting beams, Proc. Instn.
civ. Engrs, v. 43, p. 175–94.
Coull A. 1966, Composite action of walls supported on beams, Build. Sci., Vol. 1: 259–270.
FEMA 356 2000, NEHRP recommended provisions for seismic regulations for new buildings and other
structures. Federal Emergency Management Agency, Washington DC, USA.
Green D.R. 1972, Interaction of solid shear walls and their supporting structures, Build. Sci., Vol. 7:
239–248.
IS 1893 (Part 1): 2002, Indian standard criteria for earthquake resistant design of structures—General
provisions and buildings, Bureau of Indian Standards, New Delhi.
IS 1893 (Part 1): 2016, Indian standard criteria for earthquake resistant design of structures—General
provisions and buildings, Bureau of Indian Standards, New Delhi.
Janardhana, M. 2010, Studies on the behavior of glass fiber reinforced gypsum wall panels, PhD Thesis,
Indian Institute of Technology Madras, India.
Jiang X. and Gu Y. 2007, Cyclic behaviour of fibre-reinforced plaster board with core concrete composite
shear walls, Proceedings of 9th Canadian Conference on earthquake engineering, September 26–29,
2007, Ottawa, Ontario, Canada, pp. 1234–1242.
Jiji Anna Varughese 2013, Displacement-based seismic design of RC frame buildings with vertical
irregularities, PhD Thesis, Indian Institute of Technology Madras, India.
Kaushik H.B. 2006, Evaluation of strengthening options for masonry-infilled RC frames with open first
storey, Ph.D. Thesis, Indian Institute of Technology Kanpur, India.
Kuang J.S. and Shubin Li. 2005, Interaction-based design formulas for transfer beams: box foundation
analogy, Practice Periodical on Structural Design and Construction, Vol. 10(2): 127–132.
Liu K., Wu Y.F., and Jiang X.L. 2008, Shear strength of concrete filled glass fiber reinforced gypsum
walls, Materials and Structures, 41(4): 649–662.
Muthumani, K., Lakshmanan N., Gopalakrishnan S., Krishnamurthy T.S., Sivarama Sarma B.,
Balasubramanian K., Gopalakrishnan N., Sathish Kumar K., Bharat Kumar B.H., Sreekala R. and
Avinash S. 2002, Investigation on the behaviour of Gypcrete panels and blocks under static loading,
A report prepared by Structural Engineering Research Center, for M/S Gypcrete Building India (P)
Ltd., Chennai, India: 1–19.
Philip Cherian 2017, Performance evaluation of GFRG building systems, PhD Thesis, Indian Institute
of Technology Madras, India.
Robin Davis P. 2009, Earthquake resistant design of open ground storey RC framed buildings. Ph.D.
Thesis, Indian Institute of Technology Madras, India.
SAP 2000 NL Manual, Integrated software for analysis and design, computers and structures, Inc.,
Berkeley, California, USA.
1 INTRODUCTION
2.1 Background
Impact damage resistance of laminated composites assumes significance in the wake of the extensive use of such composites, especially in automobile bodies, aircraft fuselages and wing skins, and the skins of satellite modules. These applications were made possible by the very high stiffness and strength of fibre-reinforced plastics compared to other engineering materials of the same weight.
The use of composite materials in the aircraft industry has become so extensive that they now constitute a major portion of structural weight. However, the considerable weight savings offered by fibre-reinforced polymers are restricted by the conservative design philosophy still followed for the safe design of these costly materials. This conservatism can be attributed to the underestimated design strength of composite materials, based mainly on concern about the extent to which low-velocity impact influences the strength, stiffness and damage tolerance of composite laminates. Out-of-plane impact by foreign objects, runway debris and dropped tools is expected to occur during the operation, manufacturing, maintenance and service of composite components (Farooq, 2014). This type of impact usually leaves damage that is hardly detectable by visual inspection, referred to as barely visible impact damage (BVID), which significantly reduces the structural performance of composite laminates under service loads.
Generally, impacts are categorized into either low or high velocity impact, but a clear-cut differentiation of a low-velocity impact from a high-velocity impact event in terms of the
Figure 1. Support fixture for drop-weight test on composite laminates (Source: ASTM D7136).
In the surveyed literature, the finite element analyses of the composite laminates were conducted using the commercial code ABAQUS/Explicit. The standard specifying the requirements for low-velocity impact testing of polymer matrix composites is ASTM D7136/D7136M-12.
4.1 Modelling
Initially, researchers made use of a spring-mass model to analyse the laminate specimens, thereby reducing the degrees of freedom to a finite number and making analysis easier. However, later studies found that such an approximation could not account for the vibration of the specimens during the post-impact stages. Hence, the Hertzian classical model for contact between an elastic sphere and its half-space was thereafter applied to study the impact response of composite materials. Oguibe and Webb (1999) noted that modelling based on the Hertzian contact law is not adequate for describing the contact behaviour of laminated composite plates on account of their anisotropic nature and high variability in properties. The aftermath of all these studies led analysts to resort to
6 CONCLUSION
The main drawback of this area of research is that laminate failure itself is difficult to predict due to the highly heterogeneous nature of these materials. Impact causes stiffness degradation of the laminate specimens. The resulting specimen may or may not have a high damage tolerance, depending on the extent of damage the impact has caused. This variability in strength and stiffness after impact damage makes the analysis of composite laminate specimens even more difficult.
Laminated composites may be required to perform their expected functions in environments where they may be exposed to low energy impact damage from tool dropping, collisions
REFERENCES
Amal A.M. Badawy. (2012). “Impact behavior of glass fibers reinforced composite laminates at different
temperatures.” Ain Shams Engineering Journal, Vol. 3, pp. 105–111.
Ana M. Amaro, Paulo Nobre Balbis Reis, Marcelo de Moura and Jaime B. Santos. (2012). “Influence
of the specimen thickness on low velocity impact behavior of composites.” Journal of Polymer
Engineering, DOI: 10.1515, pp. 53–58.
ASTM D7136/D7136M-12- “Standard Test Method for Measuring the Damage Resistance of a Fiber-
Reinforced Polymer Matrix Composite to a Drop-Weight Impact Event.”
Belingardi, G., and Vadori, R. (2001). “Low velocity impact tests of laminate glass-fiber-epoxy matrix
composite material plates.” International Journal of Impact Engineering, Vol. 27, pp. 213–229.
Cantwell, W.J., and Morton, J. (1991). “The impact resistance of composite materials—a review.”
Composites, Vol. 22(5), pp. 347–362.
Farooq, U. (2014). “Finite Element Simulation of Flat Nose Low velocity impact behaviour of Carbon fibre composite laminates.” Unpublished Ph.D. Thesis, School of Engineering and Technology, University of Bolton, UK.
Khalili, S.M.R., Soroush, M., Davar, A., and Rahmani, O. (2001). “Finite element modeling of low-
velocity impact on laminated composite plates and cylindrical shells.” Composite Structures, Vol. 93,
pp. 1363–1375.
Kwang-Hee Im, Cheon-Seok Cha, Jae-Woung Park, Yong-Hun Cha, In-Young Yang, and Jong-An Jung. (2000). “Effect of temperatures on impact damage and residual strength of CFRP composite laminates.” AIP Conference Proceedings, DOI: 10.1063/1.1306184, pp. 1247–1254.
Malhotra, A., and Guild, F.J. (2014). “Impact Damage to Composite Laminates: Effect of Impact
Location.” Appl Compos Mater, Vol. 21, pp. 165–177.
Malhotra, A., Guild, F.J., and Pavier, M.J. (2008). “Edge impact to composite laminates: experiments
and simulations.” Journal of Material Science, Vol. 43, pp. 6661–6667.
Mili, F., and Necib, B. (2001). “Impact behavior of cross-ply laminated composite plates under
low velocities.” Composite Structures, Vol. 51, pp. 237–244.
Nikfar, B., and Njuguna, J. (2014). “Compression-after-impact (CAI) performance of epoxy carbon
fibre-reinforced nanocomposites using nanosilica and rubber particle enhancement.” IOP Conference
Series: Materials Science and Engineering, Vol. 64, DOI: 10.1088/012009.
Oguibe, C.N., and Webb, D.C. (1999). “Finite-element modelling of the impact response of a laminated
composite plate.” Composites Science and Technology, Vol. 59, pp. 1913–1922.
Park, H., Kong, C., Lim, S., and Lee, K. (2011). “A study on impact damage analysis and test of
composite laminates for aircraft repairable design.” 18th International Conference on Composite
materials. August 21–26, 2011, Jeju, Korea.
Pritchard, J.C., and Hogg, P.J. (1990). “The role of impact damage in post-impact compression.”
Composites, Vol. 21(6), pp. 503–511.
Richardson, M.O.W., and Wisheart, M.J. (1996). “Review of low-velocity impact properties of
composite materials.” Composites, Vol. 27 A, pp. 1123–1131.
Shyr, T.W., and Pan, Y.H. (2003). “Impact resistance and damage characteristics of composite
laminates.” Composite Structures, Vol. 62, pp. 193–203.
Zeng, S. (2014). “Characterisation of Low Velocity Impact Response in Composite Laminates.”
Unpublished Ph.D. Thesis, School of Engineering and Technology, University of Hertfordshire, UK.
ABSTRACT: A composite is a material made from two or more constituent materials with significantly different properties which, when combined, produce a material with characteristics different from those of the individual components. These materials are stronger, lighter or more economical. Composites are generally used for buildings, bridges and structures such as boat hulls, swimming pool panels, race car bodies, shower stalls, bathtubs, storage tanks, and cultured marble sinks and countertops. Composite laminates are assemblies of layers of fibrous composite materials which can be joined to provide required engineering properties, including in-plane stiffness, bending stiffness, strength and coefficient of thermal expansion. Layers of different materials may be used, resulting in a hybrid laminate. The individual layers are generally orthotropic or transversely isotropic. The theories used for explaining the behavior of isotropic materials cannot be applied to laminated composites. With the laminate exhibiting anisotropic or quasi-isotropic properties, various other theories have been applied in order to reveal their behavior. In this paper, applications of laminated composites in the engineering field, and the various theories used for explaining their behavior, are discussed.
1 INTRODUCTION
The idea of combining two or more different materials to produce a new material with improved properties has existed for some time. The use of lightweight materials for different applications has fascinated mankind. It was discovered that composite materials have advantages and higher performance compared to each of the individual materials they contain.
Today, composites are being developed that can be cured at low pressures and temperatures, typically 1 atm and as low as 30°C; an example is the flexible heating jacket suitable for plastic, metal and fiber containers. Together with lower cost and easier processing, this technology is now becoming a practical proposition for consideration by the building industry.
Various two-dimensional plate theories have been developed for modeling laminated composites; they are broadly classified as equivalent single layer, layer-wise and zig-zag theories. The Equivalent Single Layer (ESL) theories are derived by making suitable assumptions concerning the kinematics of deformation or the stress state through the thickness of the laminate. The simplest ESL laminated plate theory is the Classical Laminated Plate Theory (CLPT). The next of the ESL laminated plate theories is the First-Order Shear Deformation Theory, which extends the kinematics of the CLPT by including a gross transverse shear deformation in its kinematic assumptions. The First-Order Shear Deformation Theory requires shear correction factors, which are difficult to determine. Second and higher-order ESL laminated plate theories use higher-order polynomials in the expansion of the displacement components through the thickness of the laminate. The higher-order theories introduce additional unknowns that are often difficult to interpret in physical terms.
Composites are structural materials consisting of two or more combined constituents: the reinforcing phase and the matrix phase in which it is embedded. Monolithic metals and their alloys cannot always meet the demands of today's advanced technologies. By combining several materials to form composites, the performance requirements can be met. Composites offer several advantages over conventional materials, such as improved strength, stiffness, impact resistance, thermal conductivity and corrosion resistance, together with lightweight parts. Even though the cost of composite materials is high, the reduction in the number of parts and their lightweight nature make them an economical option when compared to conventional materials.
A lamina is a thin layer of a composite material. Laminae are available in a variety of thicknesses, ranging from thin and flexible to thick and rigid. A laminate is constructed by stacking a number of such laminae in the direction of the lamina thickness (Figure 1).
3 CLASSIFICATION OF COMPOSITES
The analysis of composite materials is quite difficult: the properties of each layer vary, yet the material properties must be analyzed as a whole. For this, various theories have been formulated.
$$
\begin{aligned}
u(x,y,z,t) &= u_0(x,y,t) - z\frac{\partial w_0}{\partial x}\\
v(x,y,z,t) &= v_0(x,y,t) - z\frac{\partial w_0}{\partial y}\\
w(x,y,z,t) &= w_0(x,y,t)
\end{aligned}
\qquad (1)
$$

where u0, v0, w0 are the displacements along the coordinate lines of a material point on the xy-plane.
For the assumed displacement field in Equation 1, \(\partial w/\partial z = 0\). The strain equations are:
$$
\begin{aligned}
\varepsilon_{xx} &= \frac{\partial u_0}{\partial x} + \frac{1}{2}\left(\frac{\partial w_0}{\partial x}\right)^2 - z\frac{\partial^2 w_0}{\partial x^2}\\
\varepsilon_{yy} &= \frac{\partial v_0}{\partial y} + \frac{1}{2}\left(\frac{\partial w_0}{\partial y}\right)^2 - z\frac{\partial^2 w_0}{\partial y^2}\\
\varepsilon_{xy} &= \frac{1}{2}\left(\frac{\partial u_0}{\partial y} + \frac{\partial v_0}{\partial x} + \frac{\partial w_0}{\partial x}\frac{\partial w_0}{\partial y}\right) - z\frac{\partial^2 w_0}{\partial x\,\partial y}\\
\varepsilon_{xz} &= \frac{1}{2}\left(-\frac{\partial w_0}{\partial x} + \frac{\partial w_0}{\partial x}\right) = 0\\
\varepsilon_{yz} &= \frac{1}{2}\left(-\frac{\partial w_0}{\partial y} + \frac{\partial w_0}{\partial y}\right) = 0\\
\varepsilon_{zz} &= 0
\end{aligned}
\qquad (2)
$$
$$
\begin{Bmatrix} N_{xx}\\ N_{yy}\\ N_{xy} \end{Bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} & A_{16}\\ A_{12} & A_{22} & A_{26}\\ A_{16} & A_{26} & A_{66} \end{bmatrix}
\begin{Bmatrix} \varepsilon_{xx}^{(0)}\\ \varepsilon_{yy}^{(0)}\\ \gamma_{xy}^{(0)} \end{Bmatrix}
+
\begin{bmatrix} B_{11} & B_{12} & B_{16}\\ B_{12} & B_{22} & B_{26}\\ B_{16} & B_{26} & B_{66} \end{bmatrix}
\begin{Bmatrix} \varepsilon_{xx}^{(1)}\\ \varepsilon_{yy}^{(1)}\\ \gamma_{xy}^{(1)} \end{Bmatrix}
\qquad (3)
$$

$$
\begin{Bmatrix} M_{xx}\\ M_{yy}\\ M_{xy} \end{Bmatrix}
=
\begin{bmatrix} B_{11} & B_{12} & B_{16}\\ B_{12} & B_{22} & B_{26}\\ B_{16} & B_{26} & B_{66} \end{bmatrix}
\begin{Bmatrix} \varepsilon_{xx}^{(0)}\\ \varepsilon_{yy}^{(0)}\\ \gamma_{xy}^{(0)} \end{Bmatrix}
+
\begin{bmatrix} D_{11} & D_{12} & D_{16}\\ D_{12} & D_{22} & D_{26}\\ D_{16} & D_{26} & D_{66} \end{bmatrix}
\begin{Bmatrix} \varepsilon_{xx}^{(1)}\\ \varepsilon_{yy}^{(1)}\\ \gamma_{xy}^{(1)} \end{Bmatrix}
\qquad (4)
$$
where \(A_{ij}\) is the Extensional Stiffness, \(D_{ij}\) the Bending Stiffness, and \(B_{ij}\) the Bending-Extensional Coupling Stiffness, which are defined in terms of the Lamina Stiffness \(Q_{ij}^{(k)}\) as:

$$\left(A_{ij}, B_{ij}, D_{ij}\right) = \int_{-h/2}^{h/2} Q_{ij}\left(1, z, z^2\right)\,dz = \sum_{k=1}^{N} \int_{z_k}^{z_{k+1}} Q_{ij}^{(k)}\left(1, z, z^2\right)\,dz$$
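As a concrete illustration of this definition, the following minimal Python sketch assembles the A, B and D matrices from per-ply transformed stiffness matrices; the ply properties are placeholders, and computing each \(Q^{(k)}\) from engineering constants and ply angles is assumed to have been done beforehand.

```python
import numpy as np

def abd_matrices(Qbar_list, thicknesses):
    """Assemble extensional (A), coupling (B) and bending (D) stiffness
    matrices from the 3x3 transformed stiffness Qbar of each ply.
    Plies are listed bottom-up; z is measured from the mid-plane."""
    h = sum(thicknesses)
    z = np.concatenate(([-h / 2], -h / 2 + np.cumsum(thicknesses)))  # interface coordinates
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, Qbar in enumerate(Qbar_list):
        A += Qbar * (z[k + 1] - z[k])                # integral of Qbar dz
        B += Qbar * (z[k + 1]**2 - z[k]**2) / 2.0    # integral of Qbar * z dz
        D += Qbar * (z[k + 1]**3 - z[k]**3) / 3.0    # integral of Qbar * z^2 dz
    return A, B, D

# Placeholder two-ply laminate with identical plies (illustrative only);
# for this symmetric lay-up B vanishes, as expected.
Q = np.array([[181e9, 2.9e9, 0], [2.9e9, 10.3e9, 0], [0, 0, 7.17e9]])
A, B, D = abd_matrices([Q, Q], [0.125e-3, 0.125e-3])
```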
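The First-Order Shear Deformation Theory extends the CLPT kinematics with independent rotations \(\phi_x\) and \(\phi_y\); its standard displacement field (cf. Reddy, 2004) is:

$$
\begin{aligned}
u(x,y,z,t) &= u_0(x,y,t) + z\,\phi_x(x,y,t)\\
v(x,y,z,t) &= v_0(x,y,t) + z\,\phi_y(x,y,t)\\
w(x,y,z,t) &= w_0(x,y,t)
\end{aligned}
$$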
where u0, v0, w0, φx, φy are unknown functions to be determined; the nonlinear strains associated with the displacement field are:

$$
\begin{aligned}
\varepsilon_{xx} &= \frac{\partial u_0}{\partial x} + \frac{1}{2}\left(\frac{\partial w_0}{\partial x}\right)^2 + z\frac{\partial \phi_x}{\partial x}\\
\varepsilon_{yy} &= \frac{\partial v_0}{\partial y} + \frac{1}{2}\left(\frac{\partial w_0}{\partial y}\right)^2 + z\frac{\partial \phi_y}{\partial y}\\
\gamma_{xy} &= \frac{\partial u_0}{\partial y} + \frac{\partial v_0}{\partial x} + \frac{\partial w_0}{\partial x}\frac{\partial w_0}{\partial y} + z\left(\frac{\partial \phi_x}{\partial y} + \frac{\partial \phi_y}{\partial x}\right)\\
\gamma_{xz} &= \frac{\partial w_0}{\partial x} + \phi_x, \qquad \gamma_{yz} = \frac{\partial w_0}{\partial y} + \phi_y\\
\varepsilon_{zz} &= 0
\end{aligned}
\qquad (8)
$$
The Strains (εxx, εyy, γxy) are linear through the laminate thickness, while the Transverse Shear Strains (γxz, γyz) are constant through the thickness of the laminate in the FSDT.
$$
\begin{Bmatrix} N_{xx}\\ N_{yy}\\ N_{xy} \end{Bmatrix}
=
\begin{bmatrix} A_{11} & A_{12} & A_{16}\\ A_{12} & A_{22} & A_{26}\\ A_{16} & A_{26} & A_{66} \end{bmatrix}
\begin{Bmatrix}
\frac{\partial u_0}{\partial x} + \frac{1}{2}\left(\frac{\partial w_0}{\partial x}\right)^2\\
\frac{\partial v_0}{\partial y} + \frac{1}{2}\left(\frac{\partial w_0}{\partial y}\right)^2\\
\frac{\partial u_0}{\partial y} + \frac{\partial v_0}{\partial x} + \frac{\partial w_0}{\partial x}\frac{\partial w_0}{\partial y}
\end{Bmatrix}
+
\begin{bmatrix} B_{11} & B_{12} & B_{16}\\ B_{12} & B_{22} & B_{26}\\ B_{16} & B_{26} & B_{66} \end{bmatrix}
\begin{Bmatrix}
\frac{\partial \phi_x}{\partial x}\\
\frac{\partial \phi_y}{\partial y}\\
\frac{\partial \phi_x}{\partial y} + \frac{\partial \phi_y}{\partial x}
\end{Bmatrix}
\qquad (9)
$$

$$
\begin{Bmatrix} M_{xx}\\ M_{yy}\\ M_{xy} \end{Bmatrix}
=
\begin{bmatrix} B_{11} & B_{12} & B_{16}\\ B_{12} & B_{22} & B_{26}\\ B_{16} & B_{26} & B_{66} \end{bmatrix}
\begin{Bmatrix}
\frac{\partial u_0}{\partial x} + \frac{1}{2}\left(\frac{\partial w_0}{\partial x}\right)^2\\
\frac{\partial v_0}{\partial y} + \frac{1}{2}\left(\frac{\partial w_0}{\partial y}\right)^2\\
\frac{\partial u_0}{\partial y} + \frac{\partial v_0}{\partial x} + \frac{\partial w_0}{\partial x}\frac{\partial w_0}{\partial y}
\end{Bmatrix}
+
\begin{bmatrix} D_{11} & D_{12} & D_{16}\\ D_{12} & D_{22} & D_{26}\\ D_{16} & D_{26} & D_{66} \end{bmatrix}
\begin{Bmatrix}
\frac{\partial \phi_x}{\partial x}\\
\frac{\partial \phi_y}{\partial y}\\
\frac{\partial \phi_x}{\partial y} + \frac{\partial \phi_y}{\partial x}
\end{Bmatrix}
\qquad (10)
$$

$$
\begin{Bmatrix} Q_y\\ Q_x \end{Bmatrix}
= K
\begin{bmatrix} A_{44} & A_{45}\\ A_{45} & A_{55} \end{bmatrix}
\begin{Bmatrix}
\frac{\partial w_0}{\partial y} + \phi_y\\
\frac{\partial w_0}{\partial x} + \phi_x
\end{Bmatrix}
\qquad (11)
$$
In the Higher-Order (third-order) Shear Deformation Theory, the displacement field is expanded as:

$$
\begin{aligned}
u(x,y,z,t) &= u_0(x,y,t) + z\,\phi_x(x,y,t) - \frac{4z^3}{3h^2}\left(\phi_x + \frac{\partial w_0}{\partial x}\right)\\
v(x,y,z,t) &= v_0(x,y,t) + z\,\phi_y(x,y,t) - \frac{4z^3}{3h^2}\left(\phi_y + \frac{\partial w_0}{\partial y}\right)\\
w(x,y,z,t) &= w_0(x,y,t)
\end{aligned}
\qquad (12)
$$

The associated strains vary through the thickness as:

$$
\begin{Bmatrix} \varepsilon_{xx}\\ \varepsilon_{yy}\\ \gamma_{xy} \end{Bmatrix}
=
\begin{Bmatrix} \varepsilon_{xx}^{(0)}\\ \varepsilon_{yy}^{(0)}\\ \gamma_{xy}^{(0)} \end{Bmatrix}
+ z\begin{Bmatrix} \varepsilon_{xx}^{(1)}\\ \varepsilon_{yy}^{(1)}\\ \gamma_{xy}^{(1)} \end{Bmatrix}
+ z^3\begin{Bmatrix} \varepsilon_{xx}^{(3)}\\ \varepsilon_{yy}^{(3)}\\ \gamma_{xy}^{(3)} \end{Bmatrix},
\qquad
\begin{Bmatrix} \gamma_{yz}\\ \gamma_{xz} \end{Bmatrix}
=
\begin{Bmatrix} \gamma_{yz}^{(0)}\\ \gamma_{xz}^{(0)} \end{Bmatrix}
+ z^2\begin{Bmatrix} \gamma_{yz}^{(2)}\\ \gamma_{xz}^{(2)} \end{Bmatrix}
\qquad (13)
$$

where \(c_2 = 3c_1\) and \(c_1 = \dfrac{4}{3h^2}\), with:
$$
\begin{Bmatrix} \varepsilon_{xx}^{(0)}\\ \varepsilon_{yy}^{(0)}\\ \gamma_{xy}^{(0)} \end{Bmatrix}
=
\begin{Bmatrix}
\frac{\partial u_0}{\partial x} + \frac{1}{2}\left(\frac{\partial w_0}{\partial x}\right)^2\\
\frac{\partial v_0}{\partial y} + \frac{1}{2}\left(\frac{\partial w_0}{\partial y}\right)^2\\
\frac{\partial u_0}{\partial y} + \frac{\partial v_0}{\partial x} + \frac{\partial w_0}{\partial x}\frac{\partial w_0}{\partial y}
\end{Bmatrix}
\qquad (14)
$$

$$
\begin{Bmatrix} \varepsilon_{xx}^{(1)}\\ \varepsilon_{yy}^{(1)}\\ \gamma_{xy}^{(1)} \end{Bmatrix}
=
\begin{Bmatrix}
\frac{\partial \phi_x}{\partial x}\\
\frac{\partial \phi_y}{\partial y}\\
\frac{\partial \phi_x}{\partial y} + \frac{\partial \phi_y}{\partial x}
\end{Bmatrix},
\qquad
\begin{Bmatrix} \gamma_{yz}^{(0)}\\ \gamma_{xz}^{(0)} \end{Bmatrix}
=
\begin{Bmatrix}
\phi_y + \frac{\partial w_0}{\partial y}\\
\phi_x + \frac{\partial w_0}{\partial x}
\end{Bmatrix}
\qquad (15)
$$
To validate these various theories, a pressure vessel model was examined analytically and numerically using the Ansys 16.0 software:
5.3.2.1 Analytical solution
Here, D is the mean diameter of the vessel (450 mm), t the thickness of the vessel (10 mm), and the length of the vessel is 4 m. The internal pressure is P = 10 MPa.

Hoop stress:

$$\sigma_0 = \frac{PD}{2t} = \frac{10 \times 450}{2 \times 10} = 225\ \mathrm{MPa}$$

E = 200,000 MPa, µ = 0.3.
Figures 2 and 3 show the deformed shape and the stress in the x direction. Taking values at the central nodes of the cylinder, the hoop stress values obtained were:

Stress | Analytical solution | From software | % variation
6 CONCLUSIONS
Composite materials are widely used today because of their improved strength, stiffness, reduced weight, reduced life cycle cost and so on. Composites are now extensively used for the rehabilitation/strengthening of pre-existing structures that have to be retrofitted to make them seismically resistant, or to repair damage caused by seismic activity.
Composites form a new material with properties improved above those of the individual materials themselves. The fabrication of composites and their analysis need careful study to explain their structural properties. Various theories are used in the analysis of composite materials; among them, the Equivalent Single Layer (ESL) theories were used here to explain plate composites.
The ESL theory includes the Classical Lamination Plate Theory (CLPT), the First-Order
Shear Deformation Theory (FSDT), and the Higher-Order Shear Deformation Theory
(HSDT).
CLPT is less accurate as it neglects transverse shear effects. To overcome this limitation, FSDT was proposed. Since its assumed constant transverse shear strain does not satisfy the traction-free boundary conditions at the top and bottom surfaces of the plate, FSDT requires the use of a shear correction factor. The accuracy of the response predicted by FSDT depends strongly upon the choice of shear correction factors. To overcome the limitations of FSDT, HSDT, involving a transverse shear stress function, was developed. The HSDT introduces additional unknowns.
By numerically examining a model of a pressure vessel using Ansys APDL 16.0, the theories were validated. This problem was based on the First-Order Shear Deformation Theory.
REFERENCES
Auricchio, F. & Sacco, E. (2003). Refined first-order shear deformation theory models for composite
laminates. Journal of Applied Mechanics, 70, 381.
Jauhari, N., Mishra, R. & Thakur, H. (2017). Failure analysis of fibre-reinforced composite laminates.
Materials Today: Proceedings, 4, 2851–2860.
Kant, T. & Swaminathan, K. (2002). Analytical solutions for the static analysis of laminated composite
and sandwich plates based on a higher order refined theory. Composite Structures, 56, 329–344.
Kaw, A.K. (2006). Mechanics of composite materials (2nd ed.). London, New York: Taylor & Francis.
Kharghani, N. & Guedes Soares, C. (2016). Behaviour of composite laminates with embedded
delaminations. Composite Structures 150, 226–239.
Kumar, A. & Chakrabarti, A. (2017). Failure analysis of laminated skew laminates. Procedia Engineer-
ing, 173, 1560–1566.
Rastgaar Aagaah, M., Mahinfalah M., Mahmoudian, N. & Nakhaie Jazar, G. (2002). Modal analysis of
laminated composite plates using third order shear deformation theory.
Reddy, J.N. (1984). A simple higher-order theory for laminated composite plates. Journal of Applied
Mechanics, 51, 745.
Reddy, J.N. (2004). Mechanics of laminated composite plates and shells theory and analysis (2nd ed.).
(London, New York): CRC Press LLC.
Shokrieh, M.M., Akbari, S. & Daneshvar, A. (2013). A comparison between the slitting method and
the classical lamination theory in determination of macro-residual stresses in laminated composites.
Composite Structures, 96, 708–715.
Singh, D.B. & Singh, B.N. (2017). New higher order shear deformation theories for free vibration and
buckling analysis of laminated and braided composite plates. International Journal of Mechanical
Sciences 131, 265–277.
ABSTRACT: This paper presents the spatio-temporal trends and dominant change points
in the Regional Climate Model system (RegCM) simulations at 0.43° × 0.43° resolution over
Teesta river basin located in Sikkim. In this study, first the bias correction is done following
the Local Intensity Scaling (LOCI) method by comparing the interpolated model values with
the observed values of five stations for the historical period of 1983–2005. The estimated
correction is applied to the interpolated future RegCM data for the rainfall projections
of 2021–2050 period under two Representative Concentration Pathway (RCP) scenarios
(RCP4.5 and RCP8.5). The historical rainfall and rainfall projections of the near future period
of 2021–2050 are subjected to trend analysis using Mann-Kendall method and its sequential
version (SQMK), Sen’s slope estimator, linear trend fitting and Ensemble Empirical Mode
Decomposition methods. The preliminary estimates using non-parametric tests showed a likely reversal in the nature of the trend at the Gangtok and Kalimpong stations. The SQMK test detected possible trend turning points in the 2030s, except in the future rainfall of the Darjeeling station. The non-linear trend analysis showed that the future rainfall of Jalpaiguri has an increasing trend while that of Gangtok has a decreasing trend, irrespective of the two candidate RCP scenarios. The non-linear trend differs from the linear trend in the rainfall of the Lachung and Kalimpong stations under both scenarios, and in that of the Jalpaiguri station under the RCP8.5 scenario. The extraction of the true shape of the inherent non-linear trend performed in this study may help improve predictability and support better management of the water resources of the Teesta river basin.
1 INTRODUCTION
Trend analysis of rainfall is a major concern among hydrologists because of the recent evidence of global climate change. Performing such analysis for the projected climate scenarios of a river basin or geographical region of interest can help in better preparedness against natural disasters like droughts. A large number of studies have been conducted to detect the trends of hydro-climatic variables in different parts of India at the subdivisional and river basin scales (Guhathakurta and Rajeevan 2007; Kumar et al., 2010; Jain and Kumar 2012; Jain et al., 2013; Adarsh and Janga Reddy 2015). The north eastern part of India and the eastern Himalayan region are vulnerable to climatic change effects, and the scientific community is consistently monitoring such changes. For example, Jain et al. (2013) performed trend analysis of the historical rainfall records of the 1871–2008 period for different subdivisions in north eastern India. Singh and Goyal (2016a, b, 2017) focused on the climate change projections of the eastern Himalayan regions and the upper Teesta watershed using different datasets and techniques. However, they never attempted a formal trend analysis of projected rainfall.
2 METHODOLOGY
This procedure allows the scenario run to have a different wet-day frequency than the
control run. In our study, the amount of precipitation on these days was not redistributed
to the remaining rainy days.
2. In a second step, a linear scaling factor is estimated based on the long-term monthly
mean wet-day intensities. Taking only wet days into account (i.e., the observed days with
$$P^{*}_{\mathrm{scen}}(d) = P^{*1}_{\mathrm{scen}}(d)\cdot s \qquad (5)$$
By definition, the adjusted control and scenario precipitation both have the same mean,
wet-day frequency and intensity as the observed time series.
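A minimal sketch of the two LOCI steps described above follows, assuming daily precipitation arrays and an observed wet-day threshold of 0.1 mm; the function name, threshold value and the exact form of the scaling are illustrative assumptions rather than the study's implementation.

```python
import numpy as np

def loci_correct(obs, ctrl, scen, obs_thresh=0.1):
    """Local Intensity Scaling (LOCI) bias correction for daily precipitation.
    obs, ctrl: observed and control-run series for the calibration period;
    scen: scenario-run series to be corrected."""
    # Step 1: choose the model wet-day threshold so that the control run
    # reproduces the observed wet-day frequency.
    n_wet = int((obs > obs_thresh).sum())
    ctrl_thresh = np.sort(ctrl)[::-1][n_wet - 1] if n_wet > 0 else ctrl.max()
    # Step 2: linear scaling factor from long-term mean wet-day intensities.
    s = obs[obs > obs_thresh].mean() / ctrl[ctrl > ctrl_thresh].mean()
    # Apply the threshold and the scaling factor to the scenario run.
    scen_wet = np.where(scen > ctrl_thresh, scen, 0.0)
    return scen_wet * s
```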
The Teesta river basin (latitude 26°42′N–28°06′N, longitude 88°03′E–88°58′E), a climatically sensitive basin located within the Eastern Himalayas in Sikkim, India, was selected for the present study. A major portion of the basin is hilly, with elevations ranging from 285 m to 8,586 m. It is bounded by the Himalayas to the north, east and west. Of the total area (11,650 km²), nearly 60% of the catchment lies in Sikkim, while the rest falls in the Darjeeling district of West Bengal and the northern part of Bangladesh. The observed rainfall data of the Gangtok, Jalpaiguri, Kalimpong, Lachung and Darjeeling stations for 1983–2005 were collected from the India Meteorological Department (IMD). It is noted that the north western part of the catchment is mostly fed by snow and glaciers (Singh and Goyal 2017), and raingauge installations are scarce in this part, which is a unique character of the basin. The daily rainfall data of three different RCMs (HadGEM3-RA, YSU-RSM, RegCM) for the historical period and for the 2021–2050 projection were collected from the Coordinated Regional Climate Downscaling Experiment (CORDEX) database (http://www.cordex.org/).
In this study, the grid point values of the CORDEX database were first interpolated to the gauge points using the inverse distance weighted (IDW) method, as the grid points of the former database do not coincide with the station locations. The bias between the RCM simulations and the observed rainfall was corrected by the LOCI method.
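As an illustration of this interpolation step, a minimal inverse distance weighted sketch is given below; the power parameter p = 2 and the coordinates are assumptions for illustration only.

```python
import numpy as np

def idw_interpolate(grid_xy, grid_vals, station_xy, p=2.0):
    """Interpolate gridded RCM values to a station location using
    inverse distance weighting with power p."""
    d = np.linalg.norm(grid_xy - station_xy, axis=1)
    if np.any(d == 0):                   # station coincides with a grid point
        return grid_vals[np.argmin(d)]
    w = 1.0 / d**p                       # inverse-distance weights
    return np.sum(w * grid_vals) / np.sum(w)

# Illustrative usage: four surrounding grid points, one station
grid_xy = np.array([[88.0, 27.0], [88.5, 27.0], [88.0, 27.5], [88.5, 27.5]])
grid_vals = np.array([12.0, 10.5, 14.2, 11.8])   # e.g. daily rainfall (mm)
print(idw_interpolate(grid_xy, grid_vals, np.array([88.2, 27.2])))
```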
The monthly averaged values of the corrected mean precipitation of the basin for the historical period (1983–2005) were first checked for quality by comparing the monthly mean rainfall of the three candidate RCM simulations with the observed data. It was found that, among the different RCMs, the interpolated values of RegCM best reproduce the pattern of the actual observed rainfall. Hence, only the RegCM data were considered for the subsequent analysis. The LOCI corrections applied to the different monthly values are given in Figure 2. It is noted that the corrections for rainfall are multiplying factors, which are positive for all five stations. The largest correction was obtained for the Lachung station (for the month of May), whereas the smallest was obtained for Jalpaiguri (for the months of December and February).
The interpolated future monthly rainfall values were estimated for all the stations for the 2021–2050 period, applying the respective corrections for the two representative concentration pathway (RCP) scenarios, RCP4.5 and RCP8.5.
Table 1. Results of MK test and SS estimator of trend analysis of historical and projected rainfall of
different stations in Teesta basin.
In the trend analysis, the MK test and Sen's slope estimator were applied to the historical datasets (1983–2005) and to the rainfall projections under the RCP4.5 and RCP8.5 scenarios for all five stations, at the 5% significance level. The results are presented in Table 1.
From Table 1 it is noted that the trends of the historical rainfall records and of the projections under both scenarios for the Lachung station are alike, showing a decreasing trend. Also, in the historical period the rainfall of Kalimpong and Gangtok showed an increasing trend, while there is a likely reduction of rainfall at these stations in the near future (2021–2050). It is to be noted, however, that none of the trends is significant at the 5% level.
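For reference, a compact sketch of the Mann-Kendall statistic and Sen's slope estimator follows; it uses the normal approximation without the tie correction, so it is a bare-bones illustration rather than the study's exact implementation.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).
    Returns the S statistic, standardized Z and two-sided p-value."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's slope: the median of all pairwise slopes."""
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)
```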
To capture the variability of the trend in different decades of the near future period and to locate the trend turning points, an in-depth sequential trend analysis was made using the SQMK test. The details of the SQMK test can be found elsewhere (Sneyers 1990; Adarsh and Janga Reddy 2015). The results of the SQMK test of rainfalls under RCP4.5 are provided in Figure 3, and those under the RCP8.5 scenario in Figure 4.
The results of the SQMK test showed that the rainfall trend of none of the stations is significant under the RCP4.5 and RCP8.5 scenarios in the near future, except for some localized time spells under the RCP8.5 scenario in the data of Kalimpong and Gangtok during 2040–45. However, the rainfall trend at all of the stations shows a possible reduction from the 2030s. The trend of rainfall under both scenarios at Darjeeling shows an erratic pattern, and no definite trend turning point could be identified for the rainfall data of this station. The possible trend turning points of Kalimpong and Gangtok occur between about 2030 and 2035.
Further, linear trend fitting of the different rainfall series was made, and the non-linear trend was extracted by invoking the EEMD algorithm. In the EEMD implementation, a noise standard deviation of 0.2 and an ensemble size of 300 were used. In all cases 4 or 5 modes evolved to represent the rainfall variability, which is less than the expected maximum number log(N), where N is the data length (23 for historical and 30 for projected rainfall in this study).
Figure 4. Results of the SQMK test of rainfall projections of different data for the RCP8.5 scenario. The vertical bar shows a trend turning point. u(t) refers to the progressive series and u′(t) to the retrograde series.
Figure 5. Linear and non-linear trend of rainfall projections of Teesta basin for 2021–2050 period
under RCP4.5 scenario. The upper panels of each case show the data along with linear trend fitting. The
lower panel shows the corresponding non-linear trend.
Also, only one component evolved with the characteristics of the ‘residue’, while the rest possess the properties of an IMF (see Huang et al., 1998). Therefore, it is believed that the decomposition is reliable and not a case of ‘over decomposition’.
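A sketch of this extraction step using the third-party PyEMD package is shown below; the package, its constructor arguments, and the convention that the last returned component is the residue (the non-linear trend) are assumptions to verify against the installed version, not details from the paper.

```python
import numpy as np
from PyEMD import EEMD   # third-party package: pip install EMD-signal

rng = np.random.default_rng(1)
annual_rain = rng.normal(2500, 300, size=30)   # placeholder series, 2021-2050

# Ensemble size 300 and noise standard deviation 0.2, as in the study.
eemd = EEMD(trials=300, noise_width=0.2)
imfs = eemd.eemd(annual_rain)

trend = imfs[-1]          # assumed: last component is the residue (trend)
oscillations = imfs[:-1]  # remaining IMFs capture the variability
```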
The results of the linear trend and the EEMD-based non-linear trend of the rainfall projections under RCP4.5 and RCP8.5 are presented in Figures 5 and 6 respectively.
Figure 5 shows that the linear trends of the rainfall projections of the Kalimpong, Darjeeling and Jalpaiguri stations are increasing, while for the rest of the stations the trend is decreasing. On examining the corresponding non-linear trends, it is noticed that the non-linear trend matches the linear trend except for the Lachung and Kalimpong stations. The rainfall trend of the Kalimpong station decreases till 2034, after which there is a likely increase. Figure 6 shows that the linear trends of the rainfall projections of all the stations display a decreasing character, but the true non-linear trends of the different stations show a distinctly different character. The trends of Darjeeling and Gangtok are monotonically decreasing, while that of Jalpaiguri is increasing. The rainfall trend of
Lachung and Kalimpong shows a reduction up to the 2030s, after which it increases. A high level of geographical heterogeneity exists in the region, and this is reflected in the precipitation trends.
The trend analysis showed that there are distinct differences in the projected precipitation pattern of the Teesta basin. There exist large differences in elevation within the basin, with high altitudinal zones (elevations of ∼7,000 m) in the upstream portion and low altitude zones (elevations as low as 1,400 m) in the lower catchment. Hence, within the study area the major driving factor behind the precipitation (such as orography or convection) differs significantly, and this is reflected in the scenario-based precipitation projections.
The EEMD method is superior for trend analysis of hydro-meteorological variables, as it provides information on the non-linear trend, which may improve predictability efforts. The present study gives broad inferences on the changing climate of the Teesta basin under two RCP scenarios. The study provides a background for hydrologic data generation, detailed investigations of climate change and the framing of adaptation policies, and hence it may help in the overall management of the water resources of the Teesta basin in north east India.
5 CONCLUSIONS
This paper presented the trend analysis of rainfall projections of the Teesta basin in north-east India under the RCP4.5 and RCP8.5 scenarios using the MK test, Sen's slope estimator, the sequential MK test, linear fitting and EEMD methods. The important conclusions framed from the study are:
The non-parametric tests detected an increasing trend in the past and a likely reduction in the future rainfall of the Kalimpong and Gangtok stations, while the nature of the trend of the rest of the stations remains unaltered
There is no possible trend turning point at the Darjeeling station in either of the RCP scenarios in the near future during 2021–2050
Trend turning points are noticed in similar years (the 2030s) in the rainfall of the Kalimpong and Gangtok stations under both RCP scenarios
A high level of geographical heterogeneity exists in the study area, which in turn influences the major driver behind precipitation; this resulted in the differences in the trends of the projected precipitation of the basin
The non-linear trend is different from the linear trend in the rainfall of the Lachung and Kalimpong stations under both RCP scenarios, and in that of the Jalpaiguri station under the RCP8.5 scenario
The true trend of the rainfall projections of the Kalimpong station under both RCPs, and of the Lachung station under the RCP8.5 scenario, displayed distinct non-linearity. The EEMD method captured such trends successfully, and it may eventually help in accurate simulation or forecasting of the rainfall of these stations.
REFERENCES
Adarsh, S. & Janga Reddy, M. 2015. Trend analysis of rainfall in four meteorological subdivisions in
Southern India using non parametric methods and discrete wavelet transforms. International Journal
of Climatology 35.6: 1107–1124.
Carmona, A.M. & Poveda, G. 2014. Detection of long-term trends in monthly hydro-climatic series of
Colombia through Empirical Mode Decomposition. Climatic Change 123.4: 301–313.
Chatterjee S., Bisai D. & Khan A. 2012. Detection of approximate potential trend Turning Points in
temperature time series (1941–2010) for Asansol weather observation station, West Bengal, India.
Atmospheric and Climate Sciences 4: 64–69.
Franzke, C. 2014. Non-linear climate change. Nature Climate Change 4: 423–424.
Guhathakurta, P. & Rajeevan, M. 2007. Trends in the rainfall pattern over India. International Journal
of Climatology, 28.11: 1453–1469.
Huang, N.E., Shen, Z., Long, S.R., Wu, M.C., Shih, H.H., Zheng, Q., Yen, N.C., Tung, C.C., & Liu,
H.H. 1998. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-
stationary time series analysis. Proceedings of Royal Society London, Series A. 454: 903–995.
Huang, N.E., & Wu, Z. 2008. A review on Hilbert Huang Transform: Method and its applications to
geophysical studies. Reviews of Geophysics 46(2), doi: 10.1029/2007RG000228.
Jain, S.K. & Kumar, V. 2012. Trend analysis of rainfall and temperature data for India-A Review.
Current Science, 102.1: 37–49.
Jain, S.K., Kumar V. & Saharia, M. 2013. Analysis of rainfall and temperature trends in north-east
India”, International Journal of Climatology 33.4: 968–978.
Kendall, M.G. 1975. Rank Correlation Methods, 4th Edition, Charles Griffin, London, UK, 1975.
Kumar, V., Jain S.K. & Singh, Y. 2010. Analysis of long-term rainfall trends in India. Hydrological
Sciences Journal 55.4: 484–496.
Mann, H.B. 1945. Non-parametric tests against trend. Econometrica, 13.3: 245–259.
Sang, Y.F., Wang Z., & Liu, C. 2014. Comparison of the MK test and EMD method for trend
identification in hydrological time series. Journal of Hydrology, 510: 293–298.
Sang, Y.-F., Wang Z. & Liu C. 2013. Discrete wavelet-based trend identification in hydrologic time
series. Hydrological Processes 27(14): 2021–31.
Sang Y.-F., Sun F., Singh V.P., Xie P. & Sun J. 2017. A discrete wavelet spectrum approach to identifying
non-monotonic trend pattern of hydro-climate data. Hydrology and Earth System Science Discussions
doi:10.5194/hess-2017-6.
Sen, P.K. 1968. Estimates of the regression co-efficient based on Kendall’s tau. Journal of the American
Statistical Association 63: 1379–1389.
Singh, V. & Goyal, M.K. 2016a. Changes in climate extremes by the use of CMIP5 coupled climate
models over eastern Himalayas. Environmental Earth Science 75: 839.
Singh, V. & Goyal, M.K. 2016b. Analysis and trends of precipitation lapse rate and extreme indices
over north Sikkim eastern Himalayas under CMIP5ESM-2M RCPs experiments. Atmospheric
Research 167 (2016): 34–60.
Singh, V. & Goyal, M.K. 2017. Spatio-temporal heterogeneity and changes in extreme precipitation over
eastern Himalayan catchments India. Stochastic Environmental Research and Risk Assessment. DOI
10.1007/s00477-016-1350-3.
Sneyers, R. 1990. On the statistical analysis of series of observations. Tech. Note 143, 192 pp., Geneva,
Switzerland.
Sonali, P. & Nagesh Kumar, D. 2013. Review of trend detection methods and their application to detect
temperature change in India. Journal of Hydrology 476: 212–227.
Torres, M.E., Colominas, M.A., Schlotthauer, G. & Flandrin, P. 2011. A complete ensemble empirical
mode decomposition with adaptive noise. IEEE International conference on Acoustic Speech and
Signal Processing, Prague 22–27 May 2011, pp. 4144–4147.
Unnikrishnan, P. & Jothiprakash, V. 2015. Extraction of nonlinear trends using singular spectrum
analysis. Journal of Hydrologic Engineering 10.1061/(ASCE)HE.1943-5584.0001237, 05015007.
Widmann, M. & Bretherton, C.S. 2000. Validation of mesoscale precipitation in the NCEP reanalysis
using a new grid cell dataset for the Northwestern United States. Journal of Climate, 13(11): 1936–1950.
Wu, Z. & Huang, N.E. 2005. Ensemble Empirical Mode Decomposition: A noise-assisted data analysis
method, Centre for Ocean-Land-Atmospheric Studies Tech. Rep. 193, Cent. for Ocean-Land-Atmos.
Stud., Calverton, Md. 1–51 (ftp://grads.iges.org/pub/ctr/ctr_193.pdf ).
Wu, Z., Huang, N.E., Long, S.R. & Peng, C.K. 2007. On the trend, detrending and variability of
nonlinear and non-stationary time series. Proceedings of National Academy of Science USA, 104(38):
14889–14894.
ABSTRACT: Coastal systems are dynamic environments comprising three main interrelated components, namely morphology, sediments and forcing parameters. Numerical models of shoreline evolution are useful tools for establishing trends and forecasting shoreline position scenarios on decadal temporal scales. In the present study, an attempt was made to study beach morphological changes in the Valiathura–Poonthura stretch of the Trivandrum coast using the LITPACK software. Sediment transport, profile change and shoreline evolution were simulated using different modules of the LITPACK model. The southern-most region of the stretch undergoes severe erosion during the monsoon. Coastline evolution over a period of ten years implies that the coast is eroding and the shoreline is retreating eastward. An average approximate retreat of 30 m was observed for the shoreline over the period 2005–2015. A numerical model was applied to analyze the best layout of protective structures, and it was found that groynes with a spacing of twice their length were more effective for sediment trapping and shoreline development than detached breakwaters.
1 INTRODUCTION
Complex and diverse natural processes that occur in the coastal zone bring physical, chemical and biological changes to fragile coastlines. The coastline of India is undergoing changes due to several human interventions. Shore erosion is currently causing damage to shorelines and public properties, not only along the coast of Kerala but also around the world. There are various coastal protection methods such as seawalls, groynes, detached breakwaters and so on.
The present study deals with beach morphological changes along the Valiathura–Poonthura coastal stretch using the LITPACK software. Various configurations of breakwaters and groyne fields were analyzed to identify the structure best suited for coastal protection.
Shoreline change is a complex process which depends upon various factors like temperature, wind velocity, wave climate, sediment properties, cross-shore profile and so on. The development of an accurate model for shoreline change is a difficult process which requires a large amount of data and a considerable amount of time and effort.
Thach et al. (2007) studied shoreline change using a LITPACK mathematical model at Cat Hai Island, Vietnam. According to the simulated and calculated results, the selected protective construction system, which includes revetments, T-shaped sand-preventive constructions and submerged breakwaters, is the most suitable and reasonable countermeasure for Cat Hai shoreline stabilization.
Shamji et al. (2010) studied the application of numerical modeling of morphological
changes in a high-energy beach during the south-west monsoon. The LITPROF module of
the LITPACK software was found to accurately simulate beach morphological changes by
adjustment of the calibration parameters. The model performance, computed using different
statistical methods, was found to be good.
Paul and Pillai (2014) conducted a study on shoreline evolution due to the construction of rubble-mound breakwaters at Munambam inlet. They found that accretion dominated shoreline changes and there was a net advance in the shoreline. Thus, the breakwaters were found to be very effective in trapping the littoral sediments along the shoreline.
2 METHODOLOGY
Coastal processes such as shoreline changes, nearshore waves, long-shore currents and sedi-
ment characteristics were studied to understand various morphological changes including
coastal erosion along the Valiathura coast. The data collected provided the input for numeri-
cal model studies. The input data included wave climate data, initial coastline, initial cross-
shore profile, sediment characteristics and so on. Wave data including wave height, wave
period and wave direction for the period 1981–1984 were collected. As there was no signifi-
cant variation in wave characteristics over these years, the same data were used for the simulations for the year 2005. The cross-shore profile and sediment characteristics for 2005, defined from a depth of 5.4 m up to 3.2 m above mean surface level, were obtained from Shamji et al. (2010). The initial shoreline position was obtained from Google Earth, ArcGIS and MIKE
Zero. A baseline, which is nearly parallel to the coastline, was drawn in the georeferenced
image of the study area. The baseline is 2 km long and is divided into grids of 100 m length. The distance from the baseline to the coastline at each grid point was measured in ArcGIS.
The numerical modeling of shoreline change was done using LITPACK. The governing equations were solved using a finite difference approach, and the software computes wave shoaling, refraction and diffraction, and the resulting sediment transport, at each time step for each grid point. The governing equation for sediment transport is shown in Equation (1):
$$\tau_b - \frac{d}{dy}\!\left(\rho E D \frac{du}{dy}\right) = -\frac{ds_{xy}}{dy} + \tau_w + \tau_{cur} \qquad (1)$$
where τb is the bed shear stress due to the long-shore current, ρ is the density of water, E is the momentum exchange coefficient, D is the water depth, u is the long-shore current velocity, y is the shore-normal coordinate, sxy is the shear component of the radiation stress, and τw and τcur are the driving forces due to wind and coastal currents, respectively.
The cross-shore profile model describes cross-shore profile changes based on a time series
of wave events. The main assumptions in the profile change model are that long-shore gradi-
ents in hydrodynamic and sediment conditions are negligible and depth contours are parallel
to the coastline. Coastal morphology is described by the cross-shore profile. Wave transfor-
mation across the profile is calculated including the effects of shoaling, refraction, bed fric-
tion and wave breaking. Bed level change is described by a continuity equation for sediment:
$$\frac{\partial h}{\partial t} = -\frac{1}{1-n}\frac{\partial q_s}{\partial x} \qquad (2)$$
where h is the bed level, qs is the cross-shore transport and n is the porosity. The boundary condition is that the sediment transport is zero at the coastline. The shoreline change model solves an analogous continuity equation, Equation (3), for the coastline:
$$\frac{\partial y_c}{\partial t} = -\frac{1}{h_{act}}\frac{\partial Q}{\partial x} + \frac{Q_{sou}}{h_{act}\,\Delta x} \qquad (3)$$
where yc is the distance from the baseline to the coastline, t is time, hact is the height of the active cross-shore profile, Q is the long-shore transport of sediment expressed in volumes, x is the long-shore position, ∆x is the long-shore discretization step and Qsou is the source/sink term expressed in volume/∆x.
3 NUMERICAL MODELING
Littoral transport was calculated using the LITDRIFT module. It was seen that from June to August the beach eroded, with a high rate of sediment transport. Maximum erosion occurred in the month of June, with a corresponding sediment transport of 99971.448 m³/month. Sediment transport during the entire beach-building period, extending from September to April, was toward the north, with maximum values in September and October. The calculated net and gross transport were 0.1155 × 10⁵ m³/year and 0.4059 × 10⁵ m³/year, respectively.
The validation of LITPROF was carried out for Valiathura for the peak monsoon period of mid-June 2005. High erosion was observed during this period, as seen in the beach profiles presented in Figure 3. The berm was completely eroded and deposited as a bar offshore. The input data for the simulations included wave parameters, cross-shore beach positions, berm characteristics and so on. To minimize the difference between measured and computed values, the model was run for many cases. The main calibration parameter was the scale factor. The model was run for scale factors of 0.8, 0.9 and 1.0. The simulated profiles for each scale factor after 14 days were compared with the actual cross-shore profile (Shamji et al., 2010).
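The calibration loop described above can be expressed compactly: run the profile model once per candidate scale factor and keep the factor that minimizes the misfit against the measured profile. A hedged sketch is given below; the run_model stand-in for a LITPROF run and the RMSE criterion are assumptions for illustration.

```python
import numpy as np

def rmse(simulated, measured):
    """Root-mean-square error between simulated and measured cross-shore profiles."""
    return np.sqrt(np.mean((np.asarray(simulated) - np.asarray(measured)) ** 2))

def calibrate_scale_factor(measured, run_model, candidates=(0.8, 0.9, 1.0)):
    """Return the scale factor whose simulated 14-day profile best fits the data."""
    errors = {sf: rmse(run_model(sf), measured) for sf in candidates}
    return min(errors, key=errors.get), errors
```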
After validating the profile change model, short-term profile evolution was simulated. The predicted profile at the end of a 14-day period from June 4, 2007 is given in Figure 3.
The simulated profile shows high erosion at the beach face and subsequent deposit of sedi-
ment in the offshore region. The simulated profile shows the characteristics of a bar profile,
due to the impact of monsoon waves. Simulation results from LITPROF were compared with
simulation results from the profile change model (Shamji et al., 2010) for the same period,
which is given in Figure 4. It can be seen that the LITPROF model output is quite compara-
ble to the profile change model output.
Shoreline evolution was modeled using the LITLINE module. The model was validated for one year. The actual shoreline for 2005, obtained from Google Earth, was used as the initial coastline. The shoreline after one year, also obtained from Google Earth, was used as the final coastline for validation. The input data for the simulations included the initial coastline, wave parameters, cross-shore beach positions and so on. In LITLINE, the
Figure 7. Shoreline change due to the effect of a 40 m length groyne field with 2L spacing.
Figure 8. Shoreline change due to the effect of a 50 m length groyne field with 2L spacing.
Figure 11. Shoreline change due to the effect of a 100 m length, 200 m spaced detached breakwater at 80 m from the baseline.
Figure 12. Shoreline change due to the effect of a 100 m length, 200 m spaced detached breakwater at 100 m from the baseline.
It was concluded that the selected region undergoes minor accretion for the first five months of the year. For the remaining six and a half months, the coast undergoes severe erosion, with the maximum erosion occurring in the month of June. The calculated net and gross transport are 0.1155 × 10⁵ m³/year and 0.4059 × 10⁵ m³/year. During the monsoon, the beach profile was found to undergo high erosion at the beach face, with subsequent deposition of sediment in an offshore region resulting in bar formation.
Coastline evolution over a period of ten years for the Valiathura region implies that the coast is undergoing severe erosion and the shoreline is retreating eastward. An average approximate retreat of 30 m was observed for the shoreline over the period 2005–2015. The southernmost region of the coast is subject to severe erosion. It was found that groynes offer a better solution for the coastline erosion of the Trivandrum coast with respect to sediment trapping efficiency as well as shoreline development when compared to breakwaters. It was also observed that a spacing between groynes equal to twice their length is a better option than a spacing of 2.6 times the length.
REFERENCES
Noujas, V., Thomas, K.V., Sheela Nair, L.S., Hameed, T.S., Badarees, K.O. & Ajeesh, N.R. (2014). Management of shoreline morphological changes consequent to breakwater construction. Indian Journal of Geo-Marine Sciences, 43(1), 54–61.
Paul, C. & Pillai, A.P. (2014). Shoreline evolution due to construction of rubble-mound breakwaters at Munambam inlet. International Conference on Innovations & Advances in Science, Engineering and Technology [IC-IASET 2014], 3(5), 462–467.
Shamji, V.R., Kurian, N.P., Thomas, K.V. & Shahul Hameed, T.S. (2010). Application of numerical modelling for morphological changes in a high-energy beach during the south-west monsoon. Current Science, 98(5), 691–695.
Thach, N.N., Truc, N.N. & Hau, L.P. (2007). Studying shoreline change by using LITPACK mathematical model (case study in Cat Hai Island, Hai Phong City, Vietnam). VNU Journal of Science, Earth Sciences, 23, 244–252.
P. Dhanya
WRHI, Government Engineering College, Thrissur, India
Reeba Thomas
Department of Civil Engineering, Government Engineering College, Thrissur, India
ABSTRACT: River discharge is a critical component of the global water cycle and much
concern has been raised regarding the changes it has experienced in recent years due to
unprecedented changes in temperature and precipitation. The Soil and Water Assessment
Tool (SWAT) was used for hydrologic modeling of the Thoothapuzha basin to compute the
effect of climate change on stream flow by simulating the model using dynamically down-
scaled climate data. Future changes in climate were assessed using Global Climate Mod-
els (GCMs). A dynamic downscaling model was applied to reduce large-scale atmospheric
variables in the GCMs into localized weather variables. Climate data were dynamically
downscaled using the Regional Climate Model (RCM) REMO2009 driven by the Max Planck Institute for Meteorology Earth System Model-Low Resolution (MPI-ESM-LR) GCM, for the future period 2016–2030. An emission scenario of RCP 4.5 was considered. A delta change method of bias correction was applied to the downscaled data in order to obtain corrected daily weather data. The model was calibrated and validated, and its performance evaluated using historic weather data (1974–2012) in terms of R², Percentage Bias (PBIAS) and Nash-Sutcliffe Efficiency (NSE); the values obtained were 0.79, 6.829 and
0.78, respectively, which shows that this basin could very well be modeled in SWAT. Stream
flow forecasting was carried out with bias corrected RCM data in a calibrated SWAT model.
The decreasing trend in the forecasted stream flow for the period of 2016–2030 under RCP
4.5 shows the impact of increased greenhouse gas emissions and the necessity of remedial
measures to be taken in order to preserve our water resources for the coming generations.
1 INTRODUCTION
Climate change is expected to alter the timing and magnitude of runoff, which has significant
implications for current and future water resource planning and management. A rise in mean
global temperature due to increased CO2 emissions indicates a major human induced climate
change which can cause alteration in the hydrologic cycle. This phenomenon has had discern-
ible impacts on the physical, ecological and biological systems of the earth. Climate change
is expected to adversely impact water resources, water quality and freshwater ecology. The
evolution of future greenhouse gases is highly uncertain. The Special Report on Emission Scenarios (SRES) by the Intergovernmental Panel on Climate Change (IPCC) establishes different future world development possibilities and the corresponding CO2 emissions in the twenty-first century, taking into consideration possible changes in various factors including economic development, technological development, energy intensities, energy demand and structure of energy use, resource availability, population change and land use change.
Global Climate Models (GCMs) have evolved from the Atmospheric General Circulation Models (AGCMs) widely used for daily weather prediction. They are widely used to forecast future weather parameters under the influence of the increasing atmospheric carbon dioxide (CO2) predicted by the IPCC. Most GCMs neither incorporate nor provide information on scales
smaller than a few hundred kilometers. It is possible to model small scale interactions and
The main objective of this study was to assess the impact of climate change on stream flow in the Thoothapuzha basin by conducting hydrologic modeling in SWAT. For this, model calibration and validation were first carried out using historical daily weather and rainfall data (1974–2014). Bias-corrected, dynamically downscaled GCM weather data were then used to forecast the future runoff in Thoothapuzha under emission scenario RCP 4.5, and the change in runoff was analyzed.
4 METHODOLOGY
$$P^*_{cont}(d) = P_{obs}(d) \qquad (1)$$
$$P^*_{scen}(d) = P_{obs}(d) \times \frac{\mu_m\!\left(P_{scen}(d)\right)}{\mu_m\!\left(P_{cont}(d)\right)} \qquad (2)$$
$$T^*_{cont}(d) = T_{obs}(d) \qquad (3)$$
$$T^*_{scen}(d) = T_{obs}(d) + \left[\mu_m\!\left(T_{scen}(d)\right) - \mu_m\!\left(T_{cont}(d)\right)\right] \qquad (4)$$
where P*cont(d) is the final bias-corrected precipitation in the control period, Pcont(d) is the precipitation in the control period, Pobs(d) is the observed daily precipitation, µm is the mean within a monthly interval, Pscen(d) is the precipitation in the future scenario and P*scen(d) is the final corrected precipitation in the future scenario; the temperature terms in Equations (3) and (4) are defined analogously.
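As a concrete illustration of Equations (1)–(4), the sketch below applies the monthly multiplicative factors to precipitation and the monthly additive shifts to temperature. The pandas layout and column names are assumptions for exposition, not the authors' code.

```python
import pandas as pd

def delta_change(obs, cont, scen):
    """Delta-change bias correction, Equations (1)-(4).

    obs, cont, scen : DataFrames indexed by date with columns 'P' (precipitation)
    and 'T' (temperature) for the observed record, the RCM control run and the
    RCM future scenario, respectively. Returns the corrected future daily series.
    """
    months = obs.index.month
    mu_cont = cont.groupby(cont.index.month).mean()  # monthly means, control run
    mu_scen = scen.groupby(scen.index.month).mean()  # monthly means, scenario run

    corrected = pd.DataFrame(index=obs.index)
    # Equation (2): scale observed precipitation by the monthly ratio
    corrected['P'] = obs['P'].values * (mu_scen.loc[months, 'P'].values /
                                        mu_cont.loc[months, 'P'].values)
    # Equation (4): shift observed temperature by the monthly difference
    corrected['T'] = obs['T'].values + (mu_scen.loc[months, 'T'].values -
                                        mu_cont.loc[months, 'T'].values)
    return corrected
```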
6 CONCLUSION
A rainfall-runoff model for the Thoothapuzha basin was successfully calibrated and validated in SWAT. The simulated flow at the basin outlet was compared with the observed flows in order to carry out calibration and validation of the model. The obtained values for NSE, R² and PBIAS were satisfactory and within acceptable ranges. Thus, hydrologic modeling of the Thoothapuzha basin for the period 1979–2014 was successfully completed, with NSE, PBIAS and R² values of 0.78, 6.89 and 0.79, respectively. Stream flow in the Thooth-
apuzha basin was predicted by using dynamically downscaled climate data for emission sce-
nario RCP 4.5, which is characterized by a medium rate of expected CO2 emission. In this
scenario, the trend analysis of forecasted discharge values shows a decreasing trend in their
magnitude in both monsoon and summer seasons, which can be considered as a warning sig-
nal of the impending reduction in runoff in the coming years and the need to take remedial
measures to address this.
REFERENCES
Amin, M.Z.M., Shaaban, A.J., Ohara, N., Kavvas, M.L., Chen, Z.Q., Kure, S. & Jang, S. (2015). Climate change assessment of water resources in Sabah and Sarawak, Malaysia, based on dynamically-downscaled GCM projections using a regional hydro-climate model. Journal of Hydrologic Engineering, 21(1), 05015015(9).
IPCC (2000). Special Report on Emissions Scenarios. ISBN 92-9169-113-5.
Johnson, T.E., Butcher, T.B., Parker, A. & Weaver, C.P. (2012). Investigating the sensitivity of U.S. stream flow and water quality to climate change: U.S. EPA Global Change Research Program's 20 Watersheds Project. Journal of Water Resources Planning and Management, 138(5).
Sharma, M., Coulibaly, P. & Dibike, Y. (2011). Assessing the need for downscaling RCM data for hydrologic impact study. American Society of Civil Engineers, 16(6), 534–539.
Wagesho, N., Jain, M.K. & Goel, N.K. (2013). Effect of climate change on runoff generation: Application to rift valley lakes basin of Ethiopia. American Society of Civil Engineers, 18(8), 1048–1063.
Neitsch, S.L., Arnold, J.G., Kiniry, J.R., Srinivasan, R. & Williams, J.R. (2002). SWAT user guide (2000).
Centre for Climate Change Research. cccr.tropmet.res.in/home/ftp_data.jsp.
ABSTRACT: Transient flow is the intermediate stage flow when the flow conditions are
changed from one steady state to another, caused by the sudden changes in the operating
conditions, such as abrupt closure or opening of the control valve and starting or stopping
of pumps. In liquid-filled pipe systems, such disturbances create pressure waves of large
magnitude which travel to and fro. The propagation and reflection of these pressure waves
is referred to as hydraulic transients or 'water hammer', which can either raise or lower the normal working pressure of the pipe system. Cavitation (the formation, growth and collapse of vapor bubbles in a flowing liquid) can occur in a region where the pressure of the liquid drops below its vapor pressure. Pipe systems also experience severe dynamic forces dur-
ing transient flow due to fluid structure interaction. Fluid induced structural motion, struc-
ture induced fluid motion, and the underlying coupling mechanisms, are commonly referred
to as Fluid Structure Interaction (FSI). This study aims to review a sample of the relevant lit-
erature on the main related topics, such as water hammer, cavitation, FSI and viscoelasticity.
1 INTRODUCTION
For any piping system, sudden changes in operating conditions, such as the closing or opening of a valve or the starting and stopping of a pump, will induce transient flow. This, in turn, causes 'water hammer', cavitation, Fluid Structure Interaction (FSI) and energy dissipation due to viscoelasticity. A general review of these phenomena is given in the following sections,
which include numerical and experimental investigations carried out over the past decades.
2 WATER HAMMER
Abrupt changes in the flow condition, such as the closing or opening of valves and the starting or stopping of pumps, are unavoidable in hydraulic systems, and they cause water hammer. Present water hammer theory is based on the Joukowsky formula, derived in the second half of the 19th century. As per this formula, the change in pressure depends on the change in velocity, the mass density of the fluid and the velocity of sound in the fluid. The water hammer equations comprise the continuity equation and the equation of motion. Streeter and Wylie (1967) introduced the Method of Characteristics (MOC) to solve these equations numerically. Chaudhry and Hussaini (1985) presented three explicit finite difference schemes, MacCormack, Lambda and Gabutti, for the analysis of transient flow through pipes. All three schemes are second-order accurate in both space and time and have a predictor step and a corrector step. They conducted numerical analyses to study the effect of the Courant number on the effectiveness of the methods, and found that when the Courant number equals one, there is no advantage of second-order accurate schemes over first-order accurate schemes.
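For reference, the Joukowsky formula referred to above is conventionally written in its standard form as

$$\Delta p = \rho\, a\, \Delta v$$

where Δp is the pressure change, ρ the mass density of the fluid, a the velocity of sound (pressure-wave speed) in the fluid, and Δv the change in flow velocity.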
Hadj-Taïeb and Lili (2000) presented another mathematical formulation for the analysis of
gas-liquid mixture in deformable pipes during water hammer. Two different methods were used
for the analysis: MOC and the finite difference conservative method. The model developed was
3 CAVITATION
Water hammer waves cause an alternating pressure rise/drop in a pipe system. When low pres-
sure prevails for a long time, the system is subjected to cavitation. Research on cavitation
covers a very wide area and includes gaseous cavitation, vaporous cavitation and column sepa-
ration. Among these, transient vaporous cavitation is the main focus of this study. Transient vaporous cavitation is the formation, growth and collapse of vapor bubbles in a flowing liquid in a closed conduit, in a region where the pressure of the liquid drops below its vapor pressure. Simpson and Bergant (1994a) summarized the experimental investigations in the field of
water hammer with column separation. Experiments were conducted in a copper pipeline with
two flow visualization polycarbonate blocks. Data acquisition systems consisted of pressure
transducers, a velocity measurement system and temperature measurement using a hot film
thermal probe. A computer data acquisition system was used for continuous recording and
storing of data. Simpson and Bergant (1994b) compared six cavity models including the Dis-
crete Vapor Cavity Model (DVCM) and the Discrete Gas Cavity Model (DGCM). The effect
of input parameters, such as wave speed, friction factor, initial velocity, pipe diameter and pipe
length, on the maximum pressure rise was also included in this investigation.
An analytical study on transient vaporous cavitation was conducted by Singhal et al.
(2002). In order to compute the change in fluid properties during phase transition, the vapour transport equation was considered; the vapour mass fraction is governed by this transport equation. The bubble dynamics expressions and phase-change rate expressions were also
derived in this study. Shu (2003) developed a two-phase homogeneous equilibrium vaporous
cavitation model and compared it with the conventional column separation model in which
bubble dynamics was not a factor. Mass transfer during cavitation was ignored in this model,
but frequency dependent friction was included. The developed model predicted the pressure
peaks and dissipation more accurately.
A historical review by Bergant et al. (2006) included a detailed survey of the numerical
models and experimental work conducted in the field of water hammer with column sepa-
ration during the 20th century. Urbanowicz and Zarzycki (2008) compared four cavitation
models for transient cavitating flow in pipes with the experimental results. The models con-
sidered were the column separation model, Adamkowski’s model, the gas vapor cavitation
model and the bubble cavitation model. Among these four models, the column separation
model, Adamkowski's model and the bubble cavitation model were very good in the simulation of cavitation, with Adamkowski's model found to be the best of the three at correctly simulating the pressure amplitude.
Adamkowski and Lewandowski (2009) developed a new DVCM and compared it with
the previous DVCM model. In the new model, MOC was used for analysis by incorporat-
ing unsteady friction. Experiments were conducted in a copper pipeline with two visualiz-
ing segments made of plexi-glass and it was found that vaporous cavitation was distributed
along the pipeline, with maximum concentration close to the valve, which decreased with an
increase in distance from the valve. Sumam et al. (2010) presented an alternate approach for
modeling transient vaporous cavitation. Cavitating flow was simulated using continuity and
momentum equations for water vapour mixtures, and the transport equation was used for the
vapour phase. The MacCormack scheme was used for developing the model. Model results
matched well with the published and experimental results.
In conventional water hammer analysis, the effect of pipe material was considered by includ-
ing pipe-wall elasticity. For rigid pipes, this leads to acceptable results, but for flexible
5 VISCOELASTICITY
Viscoelasticity is the property of materials that exhibit both viscous and elastic character-
istics during deformation. Under applied load, these materials exhibit instantaneous elastic
strain followed by gradual retarded strain. Most of the polymers are viscoelastic in nature,
but in the hydraulic transient analysis, this factor is neglected by most of the researchers.
Viscoelastic behavior of pipe material causes damping of the pressure wave and increases
the rate of energy dissipation. Covas et al. (2004, 2005) presented a mathematical model
with a viscoelastic pipe system and experimentally validated for hydraulic transients. Experi-
ments were conducted in a polyethylene pipe system with data acquisition systems, such as an
acquisition board, pressure transducers, strain gage and a computer. Mathematical models
included unsteady friction and pipe-wall viscoelasticity. For modeling unsteady friction, head
loss was decomposed to a steady state component and unsteady state component. The linear
viscoelasticity model was used for modeling viscoelasticity, which included instantaneous
strain and gradual retarded strain.
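In such linear viscoelastic formulations, the retarded strain is commonly represented by a series of Kelvin-Voigt elements; a standard form of the resulting creep compliance is

$$J(t) = J_0 + \sum_{k=1}^{N} J_k \left(1 - e^{-t/\tau_k}\right)$$

where J0 is the instantaneous compliance and Jk and τk are the compliance and retardation time of the k-th element. The number of elements and their calibration are model- and pipe-specific.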
Soares et al. (2008) conducted an experimental and numerical analysis of a PVC pipe dur-
ing hydraulic transient. A hydraulic transient solver had been developed for PVC pipes, in
which MOC was used for solving the governing equations. The Kelvin-Voigt mechanical
6 CONCLUSION
There are various operational problems in pipe systems that cause water hammer, cavitation,
FSI and energy dissipation due to viscoelasticity. The review of the individual cases in this
study helps in the understanding of each of them separately, so that they can be considered
together in the analysis. To analyze fluid structure interaction in transient flow, the 4-equation
model considered two fluid equations (continuity and momentum equations) and two struc-
ture equations (for axial and flexural motion of the pipe). Similarly, for modeling transient
vaporous cavitation, the vapour transport equation is most widely used. For solving the governing equations of water hammer, cavitation and fluid structure interaction, the finite difference method is used for the fluid equations and the FEM is used for the structure equa-
tions. From this review it was found that a combination of finite difference method and FEM
can be used effectively for the analysis of FSI in transient cavitating flow through pipes.
REFERENCES
Adamkowski, A. (2001). Case study: Lapino power plant penstock failure. Journal of Hydraulic Engineering, 127(7), 547–555.
Adamkowski, A. & Lewandowski, M. (2009, October). Improved numerical modelling of hydraulic transients in pipelines with column separation. In 3rd IAHR International Meeting of the Workgroup on Cavitation and Dynamic Problems in Hydraulic Machinery and Systems (pp. 419–431).
Ahmad, A. & Ali, R.K. (2008, October). Investigation of the junction coupling due to various types of the discrete points in a piping system. In 12th International Conference of the International Association for Computer Methods and Advances in Geomechanics (IACMAG).
Abuiziah, I., Oulhaj, A., Sebari, K., Abassi Saber, A., Ouazar, D. & Shakameh, N. (2013). Modeling and controlling flow transient in pipeline systems: Applied for reservoir and pump systems combined with simple surge tank. Revue Marocaine des Sciences Agronomiques et Vétérinaires, 1(3), 12–18.
Amara, L., Berreksi, A. & Achour, B. (2013). Adapted MacCormack finite-differences scheme for water hammer simulation. Journal of Civil Engineering and Science, 2(4), 226–233.
Bergant, A., Simpson, A.R. & Tijsseling, A.S. (2006). Water hammer with column separation: A historical review. Journal of Fluids and Structures, 22(2), 135–171.
Borga, A., Ramos, H., Covas, D., Dudlik, A. & Neuhaus, T. (2004). Dynamic effects of transient flows with cavitation in pipe systems. The Practical Application of Surge Analysis for Design and Operation, 9th International Conference on Pressure Surges (Vol. 2, pp. 605–617). Bedfordshire, UK: BHR Group Limited.
Brunone, B., Karney, B.W., Mecarelli, M. & Ferrante, M. (2000). Velocity profiles and unsteady pipe
friction in transient flow. Journal of Water Resources Planning and Management, 126(4), 236–244.
Chaudhry, M.H. & Hussaini, M.Y. (1985). Second-order accurate explicit finite-difference schemes for
water hammer analysis. Journal of Fluids Engineering, 107(4), 523–529.
ABSTRACT: This study investigates the variation of roughness values with flow depth, channel slope and geometry. Four experimental setups were prepared in the laboratory for the study: straight and curved channels with rectangular and trapezoidal cross sections. The variation of Manning's n and Chezy's C with geometrical features and flow depth was studied. It was found that the roughness values vary significantly in the case of curved channels, and therefore discharge prediction using conventional constant roughness coefficients creates significant errors in discharge estimation. A dimensional analysis was carried out to incorporate certain geometrical and flow parameters into the roughness equations. The roughness equations (Manning's n and Chezy's C) were modified and validated with the stage-discharge data of the Vamanapuram river. It was found that the modified equations give more accurate discharge values in erodible-bed channel flows.
1 INTRODUCTION
The usual practice in one-dimensional flow analysis is to select an appropriate value of the roughness coefficient for evaluating the discharge in open channel flow. This value of roughness is often taken as uniform for the entire cross section, the entire surface and all flow depths. Researchers have studied the phenomenon of flow mostly in straight channels and have proposed a number of methods to quantify the discharge. These methods give good results for straight channels but not for curved channels, because of the formation of secondary vortex flows at curved bends.
Under steady and uniform flow conditions, the equations proposed by Chezy or Manning (1891) are used to compute the mean velocity of a channel:

$$V = \frac{1}{n} R^{2/3} S^{1/2} \quad \text{(Manning)}, \qquad V = C\sqrt{RS} \quad \text{(Chezy)}$$

where R = the hydraulic mean radius of the channel section, n = Manning's roughness coefficient, C = Chezy's channel coefficient and S = bed slope.
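A minimal numeric check of the two formulas is sketched below; the input values are illustrative only and correspond to SI units.

```python
import math

def manning_velocity(n, R, S):
    """Mean velocity from Manning's equation, V = (1/n) R^(2/3) S^(1/2)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(S)

def chezy_velocity(C, R, S):
    """Mean velocity from Chezy's equation, V = C sqrt(R S)."""
    return C * math.sqrt(R * S)

# Illustrative values: R = 0.05 m, S = 0.01, n = 0.0126, C = 50
print(manning_velocity(0.0126, 0.05, 0.01))
print(chezy_velocity(50.0, 0.05, 0.01))
```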
Jarrett (1984) developed a model to determine Manning's n for natural high-gradient channels having stable beds and in-bank flow without a meandering coefficient. He proposed a value for Manning's n as a function of the hydraulic radius and gradient. The equation suggested by Jarrett can be used for channels with hydraulic radii from 0.15 to 2.1 m and gradients ranging from 0.002 to 0.04. Al-Romaih et al. (1999) reported the effect of bed slope and sinuosity on the discharge of a two-stage meandering channel. Based on dimensional analysis, an equation for the conveyance capacity was derived.
A series of experiments was conducted on compound channels in straight reaches under isolated and interacting conditions by Pang (1998). He found that the Manning's n value var-
2 METHODOLOGY
water with time was measured by stopwatch. A hand-operated tailgate weir was constructed
at the downstream end of the channel to regulate and maintain the desired depth of flow in
the channel. Readings were taken for the different slopes, discharges and aspect ratios. Around 44 readings were taken for the analysis from the four channel types.
Table 1. Details of various parameters in straight channels (TYPE I and TYPE II); the last two columns of each block are Manning's n and Chezy's C [remaining column headers lost in the source].

TYPE I:
13.33  0.010  0.398  0.039  0.0126  46.2
9.09   0.019  0.494  0.054  0.0126  48.8
7.06   0.028  0.566  0.066  0.0125  50.5
6.00   0.036  0.615  0.075  0.0125  51.5
5.00   0.048  0.673  0.087  0.0125  52.7

TYPE II:
15.00  0.003  0.440  0.0357  0.0107  53.41
8.82   0.018  0.560  0.0570  0.0115  53.82
8.00   0.023  0.575  0.0619  0.0119  53.00
7.59   0.028  0.590  0.0647  0.0119  53.20
6.52   0.033  0.630  0.0735  0.0121  53.29
Table 2. Details of various parameters in curved channels (TYPE III and TYPE IV); columns as in Table 1.

TYPE III:
16.00  0.005  0.480  0.032  0.0092  61.55
12.80  0.007  0.500  0.038  0.0099  58.77
11.64  0.008  0.500  0.040  0.0104  56.69
10.67  0.009  0.504  0.043  0.0107  55.35
9.85   0.010  0.498  0.045  0.0112  53.43

TYPE IV:
20.00  0.0045  0.338  0.022  0.010  51.36
15.15  0.0065  0.365  0.029  0.011  48.84
11.11  0.0099  0.395  0.038  0.013  46.02
10.00  0.0115  0.409  0.042  0.013  45.48
7.69   0.0165  0.436  0.053  0.014  43.37
[Table: mean and standard deviation of the roughness coefficients; table body lost in the source.]
with more accuracy than Chezy's C. The variation of these roughness coefficients points toward the necessity of modifying these equations.
The sinuosity ratio is inversely proportional to the velocity, slope and aspect ratio. Hence Equation (5) may be rewritten in the following form:
$$\frac{UR}{v} = \varphi\!\left(\frac{1}{S_r},\; S,\; \alpha,\; \frac{gR^{10/3}}{v^2 m^{1/3}}\right) \qquad (8)$$

$$\frac{URS_r}{v\alpha} = \varphi\!\left(\frac{gR^{10/3}S^{1/2}}{v^2 m^{1/3}}\right) \qquad (9)$$
The newly developed relationship for the Manning's n constant in open channel flow is

$$\frac{UR}{v} = \varphi\!\left(\frac{1}{S_r},\; S,\; \alpha,\; \frac{gR^{3}}{v^2}\right) \qquad (12)$$
The newly developed relationship for the Chezy's constant C in open channel flow is expressed as

$$C = \frac{0.0007\, g^{0.8421}\, R^{1.04}\, \alpha}{S^{0.079}\, v^{0.684}\, S_r} \qquad (14)$$
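Read literally as reconstructed above, Equation (14) can be evaluated directly. The sketch below does so for illustrative inputs; the interpretation of v as the kinematic viscosity, Sr as the sinuosity ratio and α as the aspect ratio follows the surrounding text, and the output should be treated as indicative only.

```python
def chezy_modified(g, R, S, v, Sr, alpha):
    """Modified Chezy coefficient, Equation (14) as reconstructed:
    C = 0.0007 g^0.8421 R^1.04 alpha / (S^0.079 v^0.684 Sr)
    """
    return (0.0007 * g ** 0.8421 * R ** 1.04 * alpha) / (S ** 0.079 * v ** 0.684 * Sr)

# Illustrative inputs: R = 0.05 m, S = 0.005, v = 1e-6 m^2/s, Sr = 1.2, alpha = 10
print(chezy_modified(9.81, 0.05, 0.005, 1e-6, 1.2, 10.0))
```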
Figure 8. Calibration equation between gR^(10/3)/(v²·m^(1/3)) and URSr/(vα) for Manning's n.
Figure 9. Calibration equation between gR^(10/3)/(v²·m^(1/3)) and URSr/(vα) for Chezy's C.
Figure 10. Variation of observed and modeled discharge using Manning's n for all channel types.
Figure 11. Variation of observed and modeled discharge using Chezy's C for all channel types.
4 CONCLUSIONS
ACKNOWLEDGMENT
The authors gratefully acknowledge the contributions of the Department of Civil Engineering, College of Engineering Trivandrum, in providing the necessary infrastructure for carrying out the study.
REFERENCES
Al-Romaih, Shiono, K. & Knight, D.W. (1999). Stage discharge assessment in compound meandering channels. Journal of Hydraulic Engineering, ASCE, 125(1), 66–77.
Yu, G. & Lim, S.Y. (2003). Modified Manning formula for flow in alluvial channels with sand-beds. Journal of Hydraulic Research, 41(6), 597–608.
Jarrett, R.D. (1984). Hydraulics of high gradient streams. Journal of Hydraulic Engineering, ASCE, 110, 1519–1539.
Pang, B. (1998). River flood flow and its energy loss. Journal of Hydraulic Engineering, ASCE, 124(2), 228–231.
ABSTRACT: Surge tanks are designed to protect a hydropower plant against the pressure
surges caused by sudden load changes. The design parameters of a surge tank are significantly
influenced by the pressure surges developed during valve operations. In this study, the pressure
variations during valve closure are taken to calculate the corresponding water level fluctuations
in the surge tank using computational fluid dynamics. These fluctuations are again computed
by incorporating fluid–structure interaction, which accounts for the damping of pressure surges
caused by the vibration of piping components. On comparing the results obtained from numeri-
cal analysis with and without considering fluid–structure interaction, it was found that the water
level fluctuations are suppressed and the system stabilizes quickly, considering the effect of fluid–
structure interaction. Hence, the size of the surge tank, which directly depends on the magnitude
of pressure surges during transient flow, can be reduced by considering the effect of fluid–struc-
ture interaction. Therefore, this paper presents an economic design of a surge tank by reducing its
diameter due to the incorporation of fluid–structure interaction in the design.
Keywords: fluid–structure interaction, computational fluid dynamics, pipe flow, surge con-
trol, transient analysis
1 INTRODUCTION
A water hammer is a pressure surge caused when fluid in motion is forced to stop suddenly. Due to the water hammer effect, a pressure wave is created which travels from the point of generation through the piping system, disturbing the entire piping system. The intensity of this pressure wave is such that it sometimes leads to the complete
failure of the system. There are many methods that can be used to control the undesirable
transients in pipeline systems and to reduce their negative effects. One such popular method,
used in hydropower plants, is the installation of a surge tank. The purpose of the surge tank is to intercept and dampen these high-pressure waves and to prevent the pressure from dropping toward vapor pressure. In the present study, the damping effect of a surge tank is studied and analyzed.
A large amount of research has been conducted to study the damping behavior in surge tanks.
The height of a surge tank is decided based on the highest possible water level during the transient
state, such as at the time of sudden valve closure. Traditional methods for deciding the size of a
surge tank are based on the magnitude of the pressure surge obtained from transient analysis.
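As general background (classical frictionless surge theory, not a result of the present study), the first surge amplitude and the period of mass oscillation in a simple surge tank are

$$Z_{max} = v_0 \sqrt{\frac{L\,A_t}{g\,A_s}}, \qquad T = 2\pi \sqrt{\frac{L\,A_s}{g\,A_t}}$$

where v0 is the initial tunnel velocity, L and At are the tunnel length and cross-sectional area, and As is the surge tank cross-sectional area. The amplitude thus decreases as the tank area, and hence its diameter, increases, which is the trade-off examined in the sizing study below.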
Research by Moghaddam (2004) provided analytical equations for the analysis and design of a simple surge tank. Equations for the maximum and minimum surge and their time of occurrence were also determined. Further sets of numerical equations were developed by Nabi et al. (2011) and Abuiziah et al. (2014) using the Runge-Kutta method. However, using such equations requires considerable computational time and effort. Other methods developed include a user-friendly chart by Seck and Fuamba (2015) and software developed for modeling the transient phenomena in a surge tank. One popular tool is computational fluid dynamics. Recently, researchers such as Lan-Lan et al. (2013) and Torlak et al. (2015) used computational fluid dynamics for modeling transient phenomena. Both considered two-phase flow in the surge tank and used a volume-of-fluid approach and the k-ε turbulence model. From the study by Torlak et al. (2015), it was found that the result from computational fluid dynamics is close to the real-scale situation. Hence, in the present study, computational fluid dynamics was adopted for the transient simulations.
2 STUDY AREA
The study area selected for the transient analysis is Pallivasal hydroelectric power plant which
is located in Munnar in the Idukki District. The Munnar region is situated at an altitude of
1500–2500 m above sea level and has an average rainfall of 275 cm. The hydraulic system
consists of a head race tunnel of 3396 m long and a pressure shaft of 2500 mm diameter. The
head race tunnel and pressure shaft are common to both powerhouses. The pressure shaft is
bifurcated into two penstocks of 2000 mm and 1600 mm diameter to feed the extension and
existing power houses. These penstocks are further divided in such a way that it has two num-
bers of 1500 mm diameter for the extension scheme and three numbers of 460 mm diameter
and three numbers of 583 mm diameter for the existing generating scheme. Figure 1 shows
the layout of Pallivasal hydroelectric power plant.
3 COMPUTATIONAL MODEL
The geometric model includes a surge tank of 7 m diameter and two pipes: the head race tunnel of 3.5 m diameter and the penstock of 2.5 m diameter.
After attaining a steady state flow condition, the transient condition generated in the surge
tank is taken from the work of Shima (2013) and shown in Figure 3 and Figure 4.
The pressure values in Figure 3 and Figure 4 were extracted using the Grabit software in Matlab. Because of the difficulty of modeling the entire geometry of the hydraulic system of Pallivasal hydropower plant in Fluent, the boundary condition applied at the outlet during transient analysis was slightly modified accordingly. The values given in Figure 3 and Figure 4 are the pressure values at the turbine ends of the two units. Since the entire piping system was not modelled for obtaining the boundary conditions, it was necessary to reduce these values when applying them at the outlet of the modeled geometry. Initially, the transient analysis was carried out for the fluid
given in Figure 5 and Figure 6. It was found that the water level slightly dampened toward the
end of simulation.
i. Surge oscillations in the surge tank due to load rejection at Unit 1
From Figure 5, the maximum water level during analysis was found to be 42 m and the mini-
mum water level was 15 m. The oscillation continues with a reduction in water level for more
than three seconds.
ii. Surge oscillations in the surge tank due to load rejection at Unit 6
From Figure 6, the maximum water level during the transient condition was observed to be 43 m and the minimum water level was 14 m. The oscillation continued for more than two seconds.
Transient analysis was again carried out for the same surge tank, for the same condition
incorporating the effect of FSI, which considers the coupling mechanism between the fluid
and the structure using the principles of system coupling. The simulated water levels at the
surge tank for Units 1 and 6 are represented in Figure 7 and Figure 8.
Figure 7 shows a graph between head and time for the surge tank at the time of transient
analysis for Unit 1. It was observed that the maximum water level was 38 m and the minimum
water level was 14 m. The water level dampened completely at 0.9 seconds.
Figure 8 shows the water level variation in the surge tank while considering the transient
condition at Unit 6 with the incorporation of fluid–structure interaction. It was found that
the maximum water level was 38 m and the minimum water level was 15 m. The pressure wave
dampened completely at 0.8 seconds.
6 COMPARISON OF RESULTS
After completing the fluid and FSI analyses for the two units during the transient condi-
tion, the next step was to compare and analyze the results. Table 1 shows the variation of
results in both units.
The water level fluctuations obtained from the FSI model were then compared with the fluid
model, and the results are depicted in Figure 9 and Figure 10. It was found that the FSI model had a lower peak than the fluid model. However, the lag between the curves increases drastically as the wave progresses. It can also be seen that the FSI model is subject to more damping than the fluid model. FSI predicts surge tank oscillations with additional energy
Figure 7. Water level variation in the surge tank considering FSI (Unit 1).
Figure 8. Water level variation in the surge tank considering FSI (Unit 6).
Table 1. Comparison of water level oscillation in the surge tank with and without FSI.

          With FSI                          Without FSI (fluid model)
Unit      Damping time  Max.   Min.         Damping time  Max.   Min.
Unit 1    0.9 s         38 m   14 m         3 s           42 m   15 m
Unit 6    0.8 s         38 m   15 m         2 s           43 m   14 m
Figure 9. Comparison of head result based on Unit 1.
Figure 10. Comparison of head result based on Unit 6.
dissipation. Hence, the design size of the surge tank can be reduced, because the earlier design considered the effect of the fluid only. For a given load change, the surge amplitude in a simple chamber is approximately inversely proportional to the diameter of the chamber; if the chamber is big enough, the surge becomes "deadbeat" and will die away after the half cycle. A similar deadbeat condition will result if the load change is sufficiently slow. Deadbeat chambers are not usually economical (Nabi et al., 2011). Hence, the design size of the surge tank can be reduced from that selected earlier by transient analysis (fluid model).
From the above results, one can say that with FSI the oscillations dampen rapidly and the head developed in the surge tank is less than in the fluid model. Therefore, in order to avoid the deadbeat condition in a surge tank for a large-magnitude pressure wave, it is necessary to assess the performance of a surge tank with a reduced diameter. Hence, a surge tank of reduced diameter (i.e. 6 m) was selected and the procedure repeated as above.
8 COMPARISON OF RESULTS
Comparing the water levels in the surge tank after the change in size, it was observed that with the reduced diameter the maximum water level rises to 42.5 m and the minimum water level drops to 14 m. However, the design remains within safe limits.
Comparing the results of the two analyses for the reduced diameter, it was observed that the FSI model gives a smaller water level fluctuation than the fluid model, which developed a high head in the surge tank during the transient condition. The comparison of
water level fluctuation in the surge tank for fluid and FSI cases is given in Figure 13, which
clearly indicates the damping mechanism during fluid–structure interaction.
Table 2. Water levels in the surge tank for the 7 m and 6 m diameters, with and without FSI.

            With FSI                          Without FSI (fluid model)
Diameter    Damping time  Max.    Min.        Damping time  Max.   Min.
7 m         0.8 s         38 m    15 m        2 s           43 m   14 m
6 m         0.6 s         42.5 m  14 m        1 s           44 m   14 m
Figure 13. Comparison of water level fluctuation in the surge tank in fluid and FSI cases.
9 CONCLUSIONS
Generally, the size of the surge tank is decided based on the magnitude of the pressure surge
obtained from transient analysis which does not consider the damping effect caused by the
fluid–structure interaction.
Based on this study, the following conclusions were made:
• The design size of the surge tank, which directly depends upon the magnitude of the pres-
sure surge, can be reduced without compromising performance.
• System stabilization occurs at an early stage when FSI is considered.
• The cost of construction of the surge tank can be minimized by obtaining the optimum
size of surge tank, hence design economy can be obtained.
REFERENCES
Abuiziah, I., Oulhaj, A., Sebari, K. & Ouazar, D. (2014). Comparative study on status and development of transient flow analysis including simple surge tank. International Science Index, Civil and Environmental Engineering, 8, 228–234.
Hou, G., Wang, J. & Layton, A. (2012). Numerical methods for fluid structure interaction—A review.
Communications in Computational Physics. 12(2), 337–377.
Lan-Lan., Zheng-Ghang., Jie., Dong & Guang-sheng. (2013). Numerical study of flow fluctuation
attenuation performance of a surge tank. Journal of Hydrodynamics, 6, 938–943.
Moghaddam, A. (2004). Analysis and design of a simple surge tank. International Journal of Engineering, Transactions A: Basics, 17, 339–345.
Nabi, G., Rehman, H., Kashif, M. & Tariq, M. (2011). Hydraulic transient analysis of surge tanks: Case study of Satpara and Golen Gol hydro power projects in Pakistan. Pakistan Journal of Engineering and Applied Sciences, 8, 34–48.
Ramdan, A. & Mustafa, H. (2013). Surge tank design considerations for controlling water hammer effects at hydroelectric power plants. University Bulletin, 3, 147–160.
Sanchez, H., Cortes, C. & Matias, A. (2004). Structural behavior of liquid filled storage tanks of large
capacity placed in seismic zones of high risk in Mexico. 13th World Conference on Earthquake Engi-
neering, Vancouver B.C., Canada, 2665.
Seck, A. & Fuamba, M. (2015). Contribution to the analytical equation resolution using charts for
analysis and design of cylindrical and conical open surge tank. Journal of Water Resource and Pro-
tection, 7, 1242–1256.
Shima, S.M.S. (2013). Optimization of surge control in hydroelectric power plant. (Project report of
Govt. Engineering College, Thrissur).
Torlak, M., Bubalo, A. & Dzafervoic, E. (2015). Numerical analysis of water elevation in a hydropower
plant surge tank, International Association for Hydro-Environment Engineering and Research, 9, 9–11.
ABSTRACT: This work aims to investigate the degradation characteristics of coir geo-
textiles in the presence of hydrocarbon, which is essential if the geotextiles are meant to be
used as a remedial measure against hydrocarbon contamination of soil. Several studies have
already reported on degradation of coir geotextiles for various civil engineering applications.
However, their degradation behavior in a hydrocarbon-contaminated environment has not yet been studied. In this study, experimental tests were conducted to explore the durability of coir geotextiles under the action of hydrocarbons. The experiment was carried out for one year to
investigate the time-dependent degradation behavior. The results show that the degradation
rate is within the normal range of values compared with previous studies. Hence, the degra-
dation rate of coir geotextiles is not much influenced by the action of hydrocarbons.
1 INTRODUCTION
Natural fibers are now a keen area of interest in many applications as they are recyclable and
readily available. Because of their availability and biodegradability, natural fibers can be used
to replace synthetic polymers in various applications. However, it is an inevitable fact that
the strength of natural fibers is continuously reduced on interaction with wet soil or water, or
sometimes under the action of micro-organisms or due to chemical reactions. This decaying
process is termed biodegradation, an important factor that needs to be considered before
these materials can be put to any engineering use (Joy et al., 2011).
Coir is a hard and tough organic fiber extracted from the husk of the coconut. Sri Lanka
and India are the major fiber producers in the world. About 55 billion coconuts are harvested annually in the world, of which only 15% of the husks are used for extracting fibers; the rest are abandoned in nature, becoming a cause of environmental pollution (Praveen et al.,
2008). Coir fiber is rich in cellulose (36 to 43% by weight) and lignin (41 to 45% by weight);
hence, it is regarded as an environmentally friendly material. It also contains constituents such as hemicellulose (0.15 to 0.25 wt%), protein (3 to 4 wt%) and ash (1 to 2 wt%) (Yao et al., 2012). Due to its high lignin content compared to other natural fibers such as jute, flax, linen
and cotton, coir is known to be the strongest of all natural fibers. From a study conducted
by Mukkulath and Thampi (2012), it is clear that the degradation of jute geotextile is faster
(within one year) than that of coir geotextile (eight to ten years). Coir fibers take 15 times
longer than cotton and seven times longer than jute to degrade (Rao & Balan, 2000). The
coir geotextile made of these coir fibers has already been proven as a material with excellent
qualities in various engineering applications. Most of the studies conducted in India show that coir geotextiles are preferable to other natural fibers for engineering applications as they are readily available, cost-effective and environmentally friendly (Praveen et al., 2008; Rao & Balan, 2000; Mukkulath & Thampi, 2012).
The advantages of natural lignocellulosic fibers also include acceptable specific strength
properties, low cost, low density and biodegradability (Rajan & Abraham, 2007). Praveen
et al. (2008) present a viable and cost-effective technology using coir geotextiles for the removal
of organic matter from wastewater. Mukkulath and Thampi (2012) have conducted a pilot
In this study, non-woven coir geotextiles of 600 GSM and 900 GSM, of size 20 cm × 20 cm, were used for the degradation study. These were supplied by the Coir Board, Thrissur, Kerala, India.
The degradation behavior of the samples was assessed by embedding them in wet lateritic soil
(pH 6) mixed with petrol (hydrocarbon) for 12 months. Table 1 shows the properties of the
materials used for the experiment.
The soil was collected from one meter below ground level and sun-dried for two days.
Lumps and debris were also removed. For setting the samples in one tray, 10 kg of clean soil
was collected, a small quantity of water was sprinkled over it and uniformly mixed with pet-
rol (0.5 liter). The evenly mixed soil was filled as two layers, and the coir samples were placed
in between the first and second layers. The whole tray was then covered with a plastic sheet
to avoid quick evaporation and holes were provided for aeration. Figure 1 shows the steps
involved in placing the samples. The rate of degradation was obtained by conducting tensile
strength tests on coir geotextile samples after treating them for one month, three months, six
months, nine months and 12 months.
The tensile strength test is usually performed to determine the durability/degradation effect of
coir geotextiles, following the procedure outlined in IS 13162 (Part 5) and ASTM D 4595. The
tests were conducted at the Central Coir Research Institute (CCRI) in Alleppey, Kerala, India.
The test was conducted as per the procedure laid down in IS 13162. The tensile properties of the test specimens were obtained as tensile strength in kN/m, which denotes the maximum resistance to deformation per unit width developed by a material when subjected to tension by an external force. Three samples were tested for each trial, and the average value
was taken as the tensile strength of the specimen. The test setup is shown in Figure 2.
The tensile strength of coir geotextiles was calculated using the formula given in Equa-
tion 1 (IS: 13162 – Part 5).
$$a_f = F_f \times C \qquad (1)$$
Figure 2. Test setup for determining tensile strength of non-woven coir geotextiles.
The tensile strength behavior of the non-woven coir geotextiles (900 GSM and 600 GSM) was studied at different time periods. The samples were embedded for one year to study the time-dependent
degradation behavior under the action of hydrocarbons. Tensile strength curves (Figure 3)
were obtained for both types of samples after treating them for one month, three months,
six months, nine months and 12 months, and the corresponding stress–strain behavior is
presented in Figure 4 and Figure 5.
The tensile strength of the geotextile is characterized by the observed maximum breaking
load. The strength corresponding to zero period indicates the strength of the untreated sam-
ple. The percentage strength reductions for each month are shown in Table 2.
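The percentage reduction reported in Table 2 is simply the strength loss relative to the untreated sample. A one-line helper makes the computation explicit; the kN/m values used are hypothetical.

```python
def strength_reduction(initial, current):
    """Percentage tensile strength reduction relative to the untreated sample."""
    return 100.0 * (initial - current) / initial

# Hypothetical: untreated strength 13.0 kN/m, strength after 12 months 5.0 kN/m
print(f"{strength_reduction(13.0, 5.0):.2f}% reduction")
```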
In the period of three to six months, the percentage reductions in strength were observed
as about 52% and 72% for the 900 GSM and 600 GSM samples, respectively. In the case
of 900 GSM samples, a slight regain of strength was observed between the sixth and ninth months, but by month 12 the reduction reached 61.54%. In the case of 600 GSM samples, a 90% strength reduction was observed in the ninth month, and a slight regain of about 18% in strength was noticed in the period of months 9–12.
The results obtained were compared with the degradation tests conducted by Rao and Balan (2000) on coir yarns embedded in different environments for one year. They observed
that the degradation rate of coir fibers mainly reflected the influence of pH of media and the
admixtures. Many studies highlight the significance of pH in the reduction of tensile strength
of coir geotextiles (Joy et al., 2011; Mukkulath & Thampi, 2012).
The pH value of the soil considered in this study is 6, which can cause around 66% of
degradation as per the results presented by Rao and Balan (2000). As the added substance, the hydrocarbon, is a non-polar, pH-neutral compound, the pH may not have influenced the results of this study. The observed overall strength reductions when
embedded in a hydrocarbon environment were about 61.54% for 900 GSM and 73.10% for
600 GSM samples, respectively. Hence, the degradation rate of geotextiles is not much influ-
enced by the action of hydrocarbons.
From the literature, it is also clear that cellulosic fibers possess the capacity for absorb-
ing oil content due to their high specific area (Hubbe et al., 2013; Karan et al., 2011). The
strength reduction in the case of the geotextile samples pertains to these justifications. Payne et al. (2012) reveal the ability of cellulosic fibers to absorb oil even in wet conditions. Within a
period of one year, some portion of hydrocarbons may get absorbed into the coir fiber due to
its high specific area. Subsequently, the fibers become brittle which may lead to lower tensile
strength. Hence, absorption capacity studies and morphological analysis are beneficial for
identifying their suitability against hydrocarbon contamination.
This degradation study on non-woven coir geotextile media was intended to focus on the application of natural fibers against hydrocarbon contamination. The results show that the degradation rate under the action of hydrocarbons is almost the same as the degradation rate under other environmental conditions. Hence, the degradation rate of coir geotextiles is not
much influenced by the presence of hydrocarbons. The result also reveals the necessity of
further experimental studies for the proper understanding of the variation in their structure,
physical and chemical compositions. This can be considered as the future scope.
REFERENCES
ASTM D−4595. (2009). ASTM D4595–11 Standard Test Method for Tensile Properties of Geotextiles by
the Wide-Width Strip Method. West Conshohocken, USA: ASTM International.
Hubbe, M.A., Rojas, O.J., Fingas, M. & Gupta, B.S. (2013). Cellulosic substrates for removal of pollut-
ants from aqueous systems: A review. 3. Spilled oil and emulsified organic liquids. BioResources, 8(2),
3038–3097.
IS 13162: Part 5 (1992). Determination of tensile properties using a wide width strip. Bureau of Indian
Standards, New Delhi, India.
Joy, S., Balan, K. & Jayasree, P.K. (2011). Biodegradation of coir geotextile in tropical climatic condi-
tions. Proceedings of the Golden Jubilee Indian Geotechnical Conference, Kochi, India, 604–606.
Karan, C.P., Rengasamy, R.S. & Das, D. (2011). Oil spill cleanup by structured fibre assembly. Indian
Journal of Fibre & Textile Research, 36(2), 190–200.
Mukkulath, G. & Thampi, S.G. (2012). Performance of coir geotextiles as attached media in biofilters
for nutrient removal. International Journal of Environmental Sciences, 3(2), 784–794.
Payne, K.C., Jackson, C.D., Aizpurua, C.E., Rojas, O.J. & Hubbe, M.A. (2012). Oil spills abatement:
Factors affecting oil uptake by cellulosic fibers. Environmental Science and Technology, 46(14),
7725–7730.
Praveen, A., Sreelakshmy, P.B. & Gopan, M. (2008). Coir geotextile-packed conduits for the removal of
biodegradable matter from wastewater. Current Science, 95(5), 655–658.
Rajan, A. & Abraham, T.E. (2007). Coir fiber–process and opportunities: Part 1. Journal of Natural
Fibers, 3(4), 29–41.
Rao, G.V. & Balan, K. (Eds.). (2000). Coir geotextiles—emerging trends. Alappuzha, Kerala: The Kerala State Coir Corporation Ltd.
Schurholz, H. (1991). Use of woven coir geotextiles in Europe. Coir, 35(2), 18–25.
ABSTRACT: A harbor is a water area partially enclosed by breakwaters and thus protected from storm-generated waves and currents. It also provides safe and suitable accommodation for vessels seeking refuge, supplies, refueling, repairs, and transfer of cargo. Because of these protective structures, only small local waves, or externally generated waves in a weakened form, remain active. The discharge entering the port basin depends on tidal streams and weather conditions. In the near-bottom layer of a port, the water movement depends greatly on the discharge energy in the upper layers, as well as on the changes in the water flow caused by roughness. These water flows are closely connected with the displacement speed of the sediments and may adversely affect the smooth operation of the port. Hence a thorough assessment of hydrodynamics is necessary to evaluate the effect of a port development on the prevailing currents. In this paper, the proposed port of Vizhinjam, India, was studied using the Delft3D-FLOW module for the formulation and modeling of its hydrodynamic behavior. This study mainly analyzed the pattern of water flux entering and leaving the basin area in different seasons; this information can be used to help in the future planning of the port. The results indicated that the breakwater considerably reduced the water flux in and out of the port in all situations, as well as the wave height within the harbor area.
1 INTRODUCTION
Kerala, in India, lies between the Arabian Sea in the west and the Western Ghats in the east. Kerala's coast runs 580 km in length, while the width of the state varies between 35 and 120 km. Large stretches of this coast are subjected to repeated erosion every year during the monsoon period. Additionally, the coastal area is subjected to geological problems and natural processes, such as coastal erosion, deposition, sedimentation, tsunamis and tidal waves. Sediment transport, associated with the movement of water, is the main cause of coastal erosion. Breakwaters are usually built parallel to the shore, or at an angle to it, to shield the shoreline from destructive wave action. Harbors are created with the help of breakwaters to provide a tranquil environment, which in turn blocks the natural movement of water and sediment.
The proposed port at Vizhinjam is located in the state of Kerala, 16 km south of the state capital Thiruvananthapuram, in close proximity to the international east-west shipping route. Vizhinjam port aims to berth large container ships and hence enhance India's ability to handle gateway and transshipment cargo, while establishing a strong supply chain network in Kerala (Vizhinjam International Sea Port Limited, 2016). The proposed port has a breakwater that will provide a tranquil environment within the harbor area. Wind and wave-induced flows will play an important role as the basin is situated along the coast. Analysis of observed siltation rates in various environmental conditions showed that harbor siltation in freshwater conditions is much less than that in salt and brackish water conditions (Nasner, 1992); this may be due to a high discharge entering the basin, which is the combined effect of the tide, waves, winds and other physical parameters. Hence it becomes imperative to study the movement of water within the harbored area so that its effect on sediment transport can also be studied.
2 METHODOLOGY
Delft3D is a fully integrated computer software suite for a multidisciplinary modeling approach. Delft3D-FLOW solves the Navier–Stokes equations for an incompressible fluid under the shallow water and Boussinesq assumptions. In the vertical momentum equation, the vertical accelerations are neglected, which leads to the hydrostatic pressure equation. In 3D models, the vertical velocities are computed from the continuity equation. The set of partial differential equations, in combination with an appropriate set of initial and boundary conditions, is solved on a finite difference grid (Deltares, 2006). The initial water level above the reference plane is specified, the uniform initial conditions for the velocity components are set to zero, and the water level at the outlet is specified as the boundary condition. As far as the port of Vizhinjam is concerned, the water flux entering and leaving the port area is a major issue, since the port is planned to accommodate larger container vessels.
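As a rough, self-contained illustration of the finite-difference approach that Delft3D-FLOW generalizes to two and three dimensions, the following Python sketch integrates the 1D linearized shallow-water equations with a specified water level at the open boundary and zero initial velocities. The depth, grid, time step and tidal forcing here are illustrative assumptions, not the Delft3D configuration used in the study.

import numpy as np

# Minimal 1D linearized shallow-water solver on a staggered grid.
# eta: water level above the reference plane (m); u: depth-averaged velocity (m/s).
g = 9.81                          # gravitational acceleration (m/s^2)
H = 10.0                          # still-water depth (m), assumed uniform
L = 10_000.0                      # domain length (m), illustrative
nx = 200
dx = L / nx
dt = 0.5 * dx / np.sqrt(g * H)    # CFL-limited time step

eta = np.zeros(nx)                # initial water level at rest
u = np.zeros(nx + 1)              # initial velocity components set to zero

for step in range(2000):
    # Specified water level at the open (seaward) boundary, e.g. a tidal signal.
    eta[0] = 0.5 * np.sin(2.0 * np.pi * step * dt / 3600.0)
    # Momentum: du/dt = -g * d(eta)/dx (hydrostatic pressure assumption).
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    # Continuity: d(eta)/dt = -H * du/dx; closed ends keep u = 0.
    eta -= dt * H * (u[1:] - u[:-1]) / dx

# The discharge through a cross section of width B would then be Q = B * H * u.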
The bathymetry of the Vizhinjam port was set up using the Delft3D software. The master plan layout (including seabed contours) of Vizhinjam port is shown in Figure 1. The application
The flux entering the harbor basin was analyzed first. At the entry cross section, the discharge was found to be higher from June to July, and lower from February to May. Figure 4 shows the discharge entering the cross section near the harbor entrance; Q1, Q2 and so on represent quarter 1, quarter 2, and so forth. The maximum discharge at the entry cross section was 2.5 × 10⁵ m³/s, and the discharge entering the port varied with season. The periodic variation in discharge was in accordance with the seasonal variations, the south-west monsoon being the period in which the major proportion of rainfall occurs. Since no measured discharge information is available for this proposed project, the results were compared with the available information from the project study (Vizhinjam International Sea Port Limited, 2016) and found to be in reasonable agreement.
Figure 5 shows the water flux near the turning circle cross section. It can be seen that the flow entering the ship berth area was low compared to other areas, whereas outside the breakwater (1.5 km away) the discharge was very high: the maximum was 2.2 × 10⁵ m³/s outside versus 6 × 10⁴ m³/s inside, and the minimum ranged from −1.2 × 10⁵ m³/s to −3.8 × 10⁵ m³/s. This difference shows the effectiveness of the breakwater system.
Figure 6 shows the discharge across the cross section outside the breakwater. Here also, the discharge (18 × 10⁶ m³/s) was found to be greater from June to July. Across the cross section of the approach channel, the discharge was likewise greater from June to July (6 × 10⁴ m³/s). Comparison of Figures 5 and 6 shows the role of the breakwater in limiting the water flux entering the port: the discharge at the cross section inside the turning circle was reduced to a great extent.
In order to assess the effect of the breakwater on the tranquility of the inner area, the average wave height for each season was estimated at the turning circle cross section and outside it. The results are presented in Table 1. It can be seen that the average wave height was reduced by 50%, so the breakwater is effective in protecting the area. Moreover, the wave height inside was less than 0.57 m, which is acceptable for a harbor area.
4 CONCLUSION
Modeling the hydrodynamic behavior of Vizhinjam port using Delft3D enabled this study to evaluate the water flux occurring within and around the harbor area. The breakwater considerably reduced the water flux in and out of the port in all situations, and also reduced the wave height. The tranquility of the port area was assessed by estimating the wave height inside the harbor, and the seasons were found to have an influence. Hence, the current alignment can provide an environment suited for berthing large ships.
REFERENCES
Deltares. (2006). Delft 3D-FLOW manual. Simulation of multi-dimensional hydrodynamic flows and
transport phenomena, including sediments. Rotterdamseweg, The Netherlands: Author.
Nasner, H. (1992). Siltation in tidal harbours, part I. Die Küste, 127–170.
Vizhinjam International Sea Port Limited. (2016). Vizhinjam international multipurpose sea port project report. Author.
ABSTRACT: The safety of a road is closely linked to variations in the speed of vehicles traveling on that road. The horizontal and vertical alignments of a highway are designed based on an assumed design speed. Researchers later recognized that drivers select a speed influenced by the roadway conditions rather than by an assumed design speed. This work attempts to develop operating speed models for crest vertical curves on two-lane non-urban roads in India. Traffic data, crash data for six years and geometric details of 55 sites formed the database for the study. Scatter plot and correlation analyses were carried out to identify the candidate variables for predicting operating speed, and multiple linear regression models were developed. The models developed in the study can be used for evaluating the design of vertical curves and for establishing speed limits on non-urban roads.
1 INTRODUCTION
Road crashes inflict heavy economic losses on a country. Safety measures on roads are necessary to reduce crashes involving both humans and vehicles, making roads safer and more user-friendly for traffic. A good geometric design provides an appropriate level of mobility for drivers while maintaining a high degree of safety; an inconsistency in design will lead to variations in speed. Operating speed is the most common and simple measure of design consistency. It is characterized as the speed chosen by the driver when not constrained by other users, and is normally represented by the 85th percentile speed, denoted as V85 (HSM, 2010).
The vertical grades and the curvature of vertical curves of roadways are also related to road safety. A vertical curve is used to avoid a sudden change of direction when moving from one grade to another (Lamm et al., 1999). These authors found that a statistically significant relationship exists between mean speed reductions and mean road traffic crashes: sites with higher speed reductions showed higher crash rates. Thus, the change in vehicle speed is a visible indicator of inconsistency in geometric design. Hence, this study attempts to develop operating speed models for crest vertical curves on non-urban roads.
2 LITERATURE REVIEW
Most research in the field of road safety has focused on establishing relationships between speed,
geometric and operational factors of road segments. This is mainly accomplished by developing
operating speed models that predict the expected speed of a vehicle with specific attributes. Operat-
ing speed is a good indicator of the level of safety on a road segment (Al-Masaeid et al., 1995; Abbas et al., 2010). Design inconsistencies are quantified by computing the difference between the design
speed and operating speed on a single element or by the difference in operating speeds on two suc-
cessive elements. Generally, the greater the difference between the operating speed and the design
speed of the curve, the greater the design inconsistency and the risk of accidents on the curve.
Jacob et al. (2013) developed operating speed and consistency models for horizontal curves on non-urban two-lane roads. Hassan (2003) concluded that the grade of the curve and sight distance are the main concerns in the design of vertical curves. Jessen et al. (2001) investigated
Vertical curves, which are provided to make a smooth and safe transition between two grades, may or may not be symmetrical. They are parabolic, not circular like horizontal curves. Vertical curves are of two types: crest vertical curves and sag vertical curves. The length of a crest vertical curve should be sufficient to provide a safe stopping sight distance. Identifying the proper grade and the safe passing sight distance are the main design criteria in analyzing the safety aspects of a vertical curve.
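As a worked illustration of the stopping sight distance requirement just described, the sketch below computes a minimum crest curve length using the common IRC relation L = NS²/4.4 (which assumes a driver eye height of 1.2 m and an object height of 0.15 m). The paper does not state which design standard it follows, so both the relation and the input values here are assumptions.

# Minimal sketch: minimum crest vertical curve length for stopping sight
# distance using the IRC relation. n is the algebraic grade difference
# (as a decimal) and ssd_m is the stopping sight distance in metres.
def crest_curve_length(n, ssd_m):
    # Case L > S (curve longer than the sight distance).
    length = n * ssd_m ** 2 / 4.4
    if length > ssd_m:
        return length
    # Case L < S (curve shorter than the sight distance).
    return 2.0 * ssd_m - 4.4 / n

print(crest_curve_length(0.06, 120.0))   # e.g. 6% grade difference, 120 m SSD -> ~196 m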
The essential part of developing operating speed models for crest vertical curves on non-urban roadways is data collection. The data required for this study are geometric data and traffic data along the study sites. Traffic data such as volume count and speed were collected using a portable infrared-based traffic data logger, a user-friendly instrument. For each vertical curve, traffic data were collected at the summit point (SP) and at the approach tangent or level section, as shown in Figure 1. Traffic data were collected for 12 hours (6.00 am to 6.00 pm).
4 DATA ANALYSIS
Of the 55 data sites, data from 45 sites were used for the model calibration and ten for the
validation of the model.
The main objective of this work was to develop operating speed models that take into consideration the geometric features of the roadway. Multiple linear regression is a statistical methodology describing the relationship between an outcome and a set of explanatory variables. The form of the multiple linear regression models is given in Equation (1):

Yi = a0 + a1X1 + a2X2 + … + anXn (1)

where Yi is the operating speed at road section 'i'; X1, X2, …, Xn are the explanatory traffic and geometric variables; and a0, a1, a2, …, an are regression coefficients.
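As a minimal sketch of this calibration step, the ordinary least squares fit below estimates the coefficients of Equation (1) for two explanatory variables. The variable names and values are hypothetical, not the study's database.

import numpy as np

# Hypothetical observations: curve length (m), approach grade (%) and
# the observed 85th percentile speed at the summit point (km/h).
curve_length = np.array([150.0, 220.0, 90.0, 310.0, 180.0])
approach_grade = np.array([2.1, 3.5, 4.0, 1.8, 2.9])
v85_summit = np.array([68.0, 62.0, 55.0, 74.0, 64.0])

# Design matrix with an intercept column; solve for a0, a1, a2 by least squares.
X = np.column_stack([np.ones_like(curve_length), curve_length, approach_grade])
coeffs, _, _, _ = np.linalg.lstsq(X, v85_summit, rcond=None)
a0, a1, a2 = coeffs
print(f"V85 = {a0:.2f} + {a1:.4f}*CL + {a2:.2f}*G")

# Coefficient of determination (R^2) for the fitted model.
pred = X @ coeffs
ss_res = np.sum((v85_summit - pred) ** 2)
ss_tot = np.sum((v85_summit - v85_summit.mean()) ** 2)
print(f"R^2 = {1.0 - ss_res / ss_tot:.3f}")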
The regression models were developed using statistical analysis software. Models were developed for operating speed at two locations: the approach tangent and the summit point of the vertical curves. A number of
The coefficients of determination (R²) varied from 0.43 to 0.86 and the adjusted R² (Ra²) from 0.31 to 0.84. All the selected models at the summit curve have F values higher than the standard table values. Also, the coefficients of the predictor variables are significantly different from zero, as per the t-test values at the 95% confidence level. The positive sign of the
This study summarizes the analysis and development of operating speed models for vertical curves. The analysis was done to determine the variables that significantly influence the speeds of vehicles at vertical curves. Correlation and scatter plot analyses showed that curve length, approach grade, ATL and sight distance are the variables that affect the operating speed at the limiting and summit points. Grade and curve length are the most important variables for predicting the operating speed of all categories of vehicle at the summit curve. The operating speed models will be useful for the geometric consistency evaluation of crest vertical curves.
ACKNOWLEDGMENTS
The authors sincerely thank the Center for Transportation Research, Department of Civil Engineering, National Institute of Technology Calicut, a center of excellence set up under the Frontier Areas of Science and Technology (FAST) scheme of the Ministry of Human Resource Development (MHRD), Govt. of India, for the support received.
REFERENCES
Abbas, S.K.S., Adnan, M.K. & Endut, I.R. (2010). Exploration of 85th percentile operating speed
model on crest vertical curve two-lane rural highways. Proceedings of Malaysian Universities Trans-
portation Research Forum and Conferences (MUTRFC). University Tenaga Nasional.
Hassan, Y. (2003). Improved design of vertical curves with sight distance profiles. In Transportation
Research Record (TRB) (1851, 13–24). Washington DC: National Research Council.
Highway Safety Manual. (2010). Washington DC: American Association of State Highway and Transportation Officials (AASHTO).
Jacob, A., Dhanya, R. & Anjaneyulu, M.V.L.R. (2013). Geometric design consistency of multiple hori-
zontal curves on two-lane rural highways. Procedia-Social and Behavioral Sciences, 104, 1068–1077.
Jessen, D.R., Schurr, K.S., McCoy, P.T., Pesti, G. & Huff, R.R. (2001). Operating speed prediction on crest vertical curves of rural two-lane highways in Nebraska. In Transportation Research Record (TRB) (1751, 67–75). Washington DC: National Research Council.
Lamm, R., Psarianos, B. & Mailaender, T. (1999). Highway design and traffic safety engineering hand-
book. Washington DC: Transportation Research Record, Transportation Research Board.
Al-Masaeid, H.R., Hamed, M., Aboul-Ela, M. & Ghannam, A.G. (1995). Consistency of horizontal alignment for different vehicle classes. In Transportation Research Record (1500, 178–183). Washington DC: Transportation Research Board.
ABSTRACT: Two-lane highways carry the bulk of inter-state and intra-state traffic in any country. The Level of Service (LOS) of such a facility can be based either on speed or on platooning. Out of the seven LOS measures studied, two platoon-related measures, Number of Followers (NF) and Follower Density (FD), were found to be the more suitable for LOS analysis of two-lane highways. NF and FD were modeled using the flow in the same direction and the flow in the opposing direction, and a linear model was found to fit the data well.
Keywords: Level of service, Platoon, Average travel speed, Percent free flow speed, Number of followers, Follower density
1 INTRODUCTION
Two-lane highways carry a major portion of passenger and freight transport in any country.
In India, more than 50% of National Highways are of two-lane category (MORTH, 2012).
Functions of National Highways range from carrying high speed inter-state and intra-state
traffic to providing access to most remote parts of the country. Hence, the quality of traf-
fic operation of two-lane highways is a matter of concern while considering the economic
development of the nation.
Two-lane highways are high-speed single carriageway roads with two lanes, one for traffic in each direction. Unlike on other facilities, overtaking has to be carried out in the lane meant for traffic in the opposite direction. The overtaking opportunity is restricted by the availability of gaps in the opposing lane and by sight distance. If no suitable overtaking opportunity is available in the opposing lane, a fast moving vehicle has to wait behind the slow moving vehicle until it gets a suitable opportunity to overtake. With the increase in traffic flow, more vehicles join the queue. This moving queue, otherwise termed a 'platoon', significantly reduces the quality of traffic operation on two-lane highways. Hence the traffic operation on a two-lane highway needs to be assessed using suitable Level of Service (LOS) measures.
2 BACKGROUND LITERATURE
The Highway Capacity Manual, 2010 (HCM, 2010), published by the Transportation Research Board, USA, suggests that a LOS measure should be easy to measure and easy for users to understand. The quality of operation may be predicted using one or more measures depending on the requirement. The measures are:
PFFS = ATSd/FFS

where ATSd is the Average Travel Speed in the direction considered and FFS is the free-flow speed. This measure can be compared across sites; a value close to 1 indicates a good performance condition.
The aim of the research is to identify a suitable LOS measure that best reflects the traffic operations on two-lane highways. The objectives of the work include identifying the factors influencing the LOS of two-lane highways, estimating and comparing various performance measures for the facility, checking the suitability of those performance measures, and suggesting the performance measure that best reflects the actual field condition.
4.1 Methodology
The speed-related measures estimated are Space Mean Speed, the Standard Deviation of Space Mean Speed and the Coefficient of Variation of Space Mean Speed. The platoon-related measures used include Number of Followers, Percent Followers, Follower Density and Average Platoon Size. Data were collected from 11 straight and level sections in various districts of Kerala. The data collected include the traffic volume, speed and time headway of individual vehicles, gathered using an infrared-based automatic traffic logger and a video camera. The speed-related and platoon-related measures were estimated and plotted against the 5-minute traffic flow rate, separately for each of the 11 sections.
In order to study the variation of the LOS measures due to traffic factors, the sites were selected from sections of uniform geometry. Uniform, level sections of roads of varying length were selected from National Highways, State Highways and Major District Roads in Kozhikode, Malappuram, Thrissur and Palakkad.
The traffic data required for estimating the various speed and platoon-related measures were collected at two levels. One set of data was collected using video recording and was used to identify the vehicles in platoons. The other set of data was collected using the automatic traffic logger. The data were collected on clear weather days for a period of 10 to 12 hours, from 6 am to 6 pm. The results from the first set of data were used to analyse the second set and to establish relationships between them. Figure 1 shows a site selected for the videographic survey (left) and the automatic traffic logger (right).
5 DATA ANALYSIS
it represents the traffic over a stretch. Similar to NF, FD also falls into three categories. Among all the measures, NF and FD have the best relation with flow rate, which means that NF and FD can be used as the LOS measures for two-lane highways. Figure 4 shows the relationship between the platoon-related measures and flow rate.
Out of the seven performance measures, which include four speed-related measures and three platoon-related measures, FD and NF were found to have a good correlation with traffic flow. The combined data from the 11 locations fit trend lines, which indicates the effect of factors other than traffic on these measures.
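A minimal sketch of how NF, PF and FD can be computed from the logged headways and speeds of one 5-minute interval is given below. The 3.25-second follower threshold is the one assumed later in this study; the data values themselves are hypothetical.

import numpy as np

# Hypothetical records for one 5-minute interval: time headway (s) and
# spot speed (km/h) of each vehicle crossing the section.
headways = np.array([1.8, 2.6, 7.4, 1.2, 3.0, 9.8, 2.1, 4.5])
speeds = np.array([52.0, 48.0, 61.0, 45.0, 47.0, 66.0, 50.0, 55.0])

THRESHOLD = 3.25                          # s; shorter headways mark 'followers'
nf = int(np.sum(headways < THRESHOLD))    # Number of Followers (NF)
pf = 100.0 * nf / len(headways)           # Percent Followers (PF)

interval_h = 5.0 / 60.0                   # 5-minute interval in hours
flow_rate = len(headways) / interval_h    # flow rate (veh/h)
sms = len(speeds) / np.sum(1.0 / speeds)  # space mean speed (harmonic mean, km/h)
density = flow_rate / sms                 # density (veh/km)
fd = density * pf / 100.0                 # Follower Density (FD, followers/km)

print(f"NF = {nf}, PF = {pf:.1f}%, FD = {fd:.2f} followers/km")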
The study is based on traffic data from 11 straight and level sections of two-lane highway. Four speed-related measures and three platoon-related measures were analysed by plotting them against the 5-minute flow rate. Out of the seven measures, Number of Followers (NF) and Follower Density (FD) were found to be the more suitable measures for LOS analysis of two-lane highways, and a linear model was found to fit the data well.
The major cause of the reduction in the quality of operation of two-lane highways is the forced following of fast moving vehicles behind slow moving vehicles due to the inability to overtake. This 'following' feature is an important aspect of a two-lane highway and decides its LOS. NF and FD are two measures which incorporate this particular aspect of traffic operation: while NF considers the number of followers at a point on the road section, FD considers the number of followers per unit length of road. The measures were found to be affected by the geometry too.
The main limitation of the study is the assumption made on the 'following' criterion. It is assumed that all vehicles have the same 'following' pattern, with the same threshold headway of 3.25 seconds; in reality this is likely to vary with factors like vehicle class, driver and locality. Moreover, since NF is an absolute count, comparison among sites may not be possible, as it depends on many other factors, as mentioned above.
ACKNOWLEDGEMENT
We are thankful to the Centre for Transportation Research, Department of Civil Engineering
for providing the necessary infrastructure for conducting the research work.
REFERENCES
Al-Kaisy, A. and Durbin, C. 2011. Platooning on two-lane two-way highways: an empirical investiga-
tion. Procedia—Social and Behavioral Sciences, 1(16), pp. 329–339.
Al-Kaisy, A. and Karjala, S. 2008. Indicators of performance on two—lane rural highways. Transporta-
tion Research Record, 2071, pp. 87–97.
Brilon, W. and Weiser, F. 2006. Two-lane rural highways: the German experience. Transportation
Research Record: Journal of the Transportation Research Board, No. 1988, Transportation Research
Board of the National Academies, Washington, D.C., 2006, pp. 38–47.
Hashim, I. and Abdel-Wahed, T.A. 2011. Evaluation of performance measures for rural two-lane roads
in Egypt. Alexandria Engineering Journal, No. 50, pp. 245–255.
MoRT&H (Ministry of Road Transport and Highways). 2012. Basic Road Statistics of India 2008–09, 2009–10 & 2010–11. Government of India, New Delhi, India.
Oregon Department of Transportation 2010. Modelling Performance Indicators on Two-Lane Rural
Highways: The Oregon Experience.
Penmetsa, P., Ghosh, I., and Chandra, S. 2015. Evaluation of performance measures for two-lane inter-
city highways under mixed traffic conditions, Journal of Transportation Engineering, 141(10), pp. 1–7.
Transportation Research Board. 2010. Highway Capacity Manual, Chapter 15. Washington, D.C., USA.
ABSTRACT: Research on traffic safety on highways underlines the need for maintaining consistency in the geometric design of highways. This paper focuses on driver workload and on how the geometry affects a driver's physiological characteristics. The work presented includes the development of a device, the Road Driver Data Acquisition System (RDDAS), comprising sensors for capturing heart rate and galvanic skin resistance, and video cameras for capturing eye blink rate. Drivers equipped with the RDDAS drove a vehicle fitted with a Global Positioning System through study stretches of known geometry. The effect of geometry on driver workload was explored using scatter plots and correlation analysis. The results indicate that heart rate and the rate of eye blinking are very good indicators of driver workload. The study could be extended to develop mathematical models that quantify the relationship between highway geometry and driver workload.
Keywords: driver workload, highway geometry, heart rate, galvanic skin resistance, rate of
eye blinking
1 INTRODUCTION
The influence of geometry has an upper hand in controlling vehicular movement on rural (non-urban) highways, where speed matters much more than traffic volume. Highways through rural areas generally support intercity trips. Drivers read the road ahead of them and adopt a speed that seems comfortable to them. Any unexpected road feature on the highway may surprise the driver and lead to erroneous driving maneuvers, which in turn may end up in crashes. As highways are meant for high speed travel, the impact of any collision that occurs will be of a grievous nature. Hence, highways need to be designed in such a way that the geometry itself guides a driver to adopt a maneuver fitting the environs.
To improve traffic safety, designers and planners use many tools and techniques. One
technique used to improve safety on roadways is geometric design consistency. Design con-
sistency refers to the highway geometry’s conformance with driver expectations. Generally,
drivers make fewer errors at geometric features that conform to their expectations.
There are several measures for evaluating the consistency of geometry. These measures are
classified as speed-based measures, alignment indices, vehicle stability-based measures and
driver workload measures (Fitzpatrick et al., 2000). Among the measures, the driver work-
load measure is the sole method which directly considers the effect of geometry on drivers. As
drivers are the major road users, it is always logical and appropriate to evaluate a road design
from the view point of its major beneficiaries. Workload will be increased as and when the
mismatch between what a driver observes in the field and what he expects increases. When this
inconsistency increases beyond a limit, the driver may adopt an erroneous driving maneuver
and this may result in a crash (Messer et al., 1981; Kannellaidis, 1996). Maintaining design
consistency minimizes driver workload and thereby reduces the chances of a crash.
This study is part of the research being carried out to derive guidelines for geometric
design based on driver workload. The prime objective of this paper is to focus on the effect
2 BACKGROUND
At present in India, a highway is designed based on a design speed concept. Highway design-
ers design different elements of highways based on a standard value of design speed and on
the premise that drivers will not exceed the speed limit set below the design speed for traffic
safety. However, speed studies have always pointed to the fact that drivers adopt their speeds
according to the highway ahead and may exceed both the speed limit and the design speed
(Leisch & Leisch, 1977). This mismatch has been attributed mainly to the fact that design
speed theory does not consider the effect of highway geometry on driver’s performance and
the variations within and between design elements.
Driver performance can be evaluated based on the workload involved while performing
the driving task. The driver is almost continuously gaining visual and kinaesthetic informa-
tion, processing it, making decisions and performing accordingly. This includes tracking the
lane, speeding up and down as demanded by the geometry, steering the vehicle and many
other tasks (Brookhuis & de Waard, 2000). A driver can only adapt to the road system as he
gathers most of the information from the road itself. The various characteristics that contrib-
ute to workload are vertical and horizontal alignment, consistency of road elements, roadside
obstructions, presence of other vehicles and environment (Weller et al., 2005). Thus, a road
system should be designed to be “user friendly” in order to reduce driver errors. Hence, rating highways based on driver workload will be more commensurate with reality.
Perception of highway alignments, especially complex or hazardous ones, can induce stress in drivers. Thus, driver workload is highly correlated with accidents. If workload drops too low or rises too high, the collision rate can increase. If workload is too low, drivers may
become inattentive or tired, and their responses to unexpected situations may then be inap-
propriate or slow. At the other end of the scale, drivers may become confused by very high
workload. In these situations, drivers may overlook or misinterpret an unexpected occur-
rence and either not respond until too late or respond inappropriately. This is closely related
to the arousal law that takes the form of an inverted U-shaped function between arousal level
and performance (Yerkes & Dodson, 1908).
There are five broad categories of workload measurement techniques (Green et al., 1993)
including primary task measurement, secondary task measurement, physiological measure-
ments, subjective techniques and input control. In this study, a physiological measurement technique is used to record driver workload.
Several non-invasive physiological measurements are thought to measure aspects of opera-
tor state that correlate with the ability to perform tasks. Measures explored include heart rate
and heart rate variability (Mulder, 1980) and eye-blink rate (Stern, Walrath & Goldstein,
1984). Heart rate tends to increase with an increase in workload. The rate of eye-blink can
be used as a measure of visual demand. Eye-blink rate decreases with an increase in visual
demand. Other measures are galvanic skin conductance, pupil diameter, blood pressure, res-
piration rate, hormone levels and the electromyogram. Helander (1978) studied the effect of narrow roads and narrow shoulders on the heart rate and electrodermal response of drivers and found that perception of a complex roadway induces stress in drivers. The method of assessing workload through physiological measures has been found promising, though it requires devices that can capture the physiological measures.
The literature review showed that many researchers are trying to arrive at a methodology for quantifying the workload of drivers by means of real driving task measurements or through simulator studies. Studies show that an abrupt increase in driver workload increases the probability of crashing, and such increases can be linked with many roadway features
where A = sight distance factor; B = curvature factor; C = lane restriction factor; D = road width factor.
Green et al. (1993) further examined the Hulse model and related workload to geometry
and found that the standard deviation of lateral positioning of the vehicle is negatively
correlated to workload. The study showed that sight distance deficiency increases driver
workload.
Krammes et al. (1995) defined workload as the portion of the total driving time that driv-
ers need to look at the roadway and measured it using the vision occlusion method. They
expressed workload in terms of the degree of curvature as given in Equation 2.
3 METHODOLOGY
A review of the literature showed that driver workload is very much related to the geometry of
the highway section. How this geometry influences driver expectancy is the subject matter of
this paper. The methodology includes three parts related to data collection:- Development
of the Road Driver Data Acquisition System (RDDAS), collection of geometric data and col-
lection of driver workload data.
Table: Descriptive statistics (mean, median, mode, standard deviation, minimum and maximum) of the geometric and driver workload variables.
Table 2. Correlation between the geometric variables (R, CL, DA, W, SE and PTL) and the workload measures. *Significant correlation.
rate, lower galvanic skin resistance and less eye blinking. At sharper curves, the driver needs to be more attentive, which makes the heart work harder. With greater workload, sweating increases, thereby reducing skin resistance. Also, the driver needs to keep his eyes open for longer to negotiate the curve. As the superelevation and the preceding tangent length of the curve increase, skin resistance also increases, which is indicative of a reduction in workload. The greater the deflection of the curve, the lower the rate of blinking and hence the greater the driver workload.
A correlation study was performed to understand and quantify the linear correlation between geometry and workload; Table 2 gives the results. The radius was found to have a negative correlation with heart rate. The curve length had a 30% correlation with the 50th percentile GSR. The deflection angle had a negative correlation with the average skin resistance and the rate of eye blinking. Superelevation and preceding tangent length are other variables that were found to correlate with galvanic skin resistance. It was noted that the width of the pavement did not have any linear correlation with any of the workload measures.
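A minimal sketch of this kind of correlation computation is given below; the radius and heart-rate values are hypothetical and serve only to illustrate the negative correlation reported above.

import numpy as np

# Hypothetical curve radii (m) and mean driver heart rates (beats/min).
radius = np.array([60.0, 95.0, 150.0, 210.0, 320.0])
heart_rate = np.array([88.0, 84.0, 81.0, 79.0, 76.0])

# Pearson correlation coefficient between a geometric variable and a workload measure.
r = np.corrcoef(radius, heart_rate)[0, 1]
print(f"r = {r:.2f}")   # negative: workload falls as radius increases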
4 CONCLUSIONS
The work, based on data from 32 drivers on 28 horizontal curves of two-lane rural highways of Kerala, has revealed that geometric design has a significant influence on driver workload. Radius and deflection angle are the two design elements that were found to have the greatest influence on workload, which was found to increase at curves with a smaller radius and a higher deflection angle. The rate of eye blinking and heart rate are two promising workload-based measures that can be used in estimating the consistency of the geometric design of highways. The work can be further extended to model the relationship between
ACKNOWLEDGMENTS
The authors acknowledge the Engineering and Technology Program division of Kerala State
Council for Science, Technology and Environment, Trivandrum for their financial support
and assistance. The technical support given by Dr. T. Elangovan, Executive Director, Road Safety, Kerala Road Safety Authority, is gratefully acknowledged. Special thanks to the project staff, Mr. M.R. Midhun and Ms. P. Maheswari, for their unconditional support.
REFERENCES
Brookhuis, K.A. & de Waard, D. (2000). Assessment of drivers' workload: Performance and subjective and physiological indexes. In Stress, workload and fatigue (pp. 321–332). CRC Press.
Cafiso, S., Di Graziano, A. & La Cava, G. (2005). Actual driving data analysis for design consistency
evaluation. Transportation Research Record, Journal of the Transportation Research Board, 1912,
19–30.
Fitzpatrick, K., Wooldridge, M.D., Tsimhoni, O., Collins, J.M., Green, P., Bauer, K.M., Parma, K.D.,
Koppa, R., Harwood, D.W., Anderson, I.B., Krammes, R.A. & Poggioli, B. (2000). “Alternative
design consistency rating methods for two-lane rural highways.” FHWA-RD-99-172 Washington D.C.:
Federal Highway of Administration.
Green, P., Lin, B. & Bagian, T. (1993). Driver workload as a function of road geometry: A pilot experi-
ment. GLCTTR 22-91/01 Michigan: University of Michigan Transportation Research Institute.
Helander, M. (1978). Applicability of drivers’ electrodermal response to the design of the traffic envi-
ronment. Journal of Applied Psychology, 63(4), 481–488.
Hulse, M.C., Dingus, T.A., Fischer, T. & Wierwille, W.W. (1989). The influence of roadway parameters
on driver perception of attentional demand. Advances in Industrial Ergonomics and Safety, 1, 451–456.
Kannellaidis, G. (1996). Human factors in highway geometric design. Journal of Transportation Engi-
neering, ASCE, 122(1), 59–66.
Krammes, R.A., Brackett, R.Q., Shaffer, M.A., Ottesen, J.L., Anderson, I.B., Fink, K.L., Collins,
K.M., Pendleton, O.J. & Messer, C.J. (1995). Horizontal alignment design consistency for rural two-
lane highways. FHWA-RD-94-034, Washington D.C., Federal Highway Administration.
Krammes, R.A., Rao, K.S. & Oh, H. (1995). Highway geometric design consistency evaluation software.
Transportation Research Record, Journal of the Transportation Research Board 1500, 19–24.
Leisch, J.E. & Leisch, J.P. (1977). New concept in design speed applications, as a control in achieving
consistent highway design. Transportation Research Record, Journal of the Transportation Research
Board, 631, 4–14.
Messer, C.J. (1980). Methodology for evaluating geometric design consistency. Facility design and oper-
ational effects. Transportation Research Record, Journal of the Transportation Research Board, 757,
7–14.
Messer, C.J., Mounce, J.M. & Brackett, R.Q. (1981). Highway geometric design consistency related to
driver expectancy. FHWA-RD-81-035. Washington DC: Federal Highway Administration.
Mulder, G. (1980). The heart rate of mental effort (Doctoral thesis). University of Groningen, Gronin-
gen, Netherlands.
Stern, J.A., Walrath, L.C. & Goldstein, R. (1984). The endogenous eyeblink. Psychophysiology, 21,
22–33.
Suh, W., Park, P., Park, C. & Chon, K. (2006). Relationship between speed, lateral placement, and driv-
ers’ eye movement at two-lane rural highways. Journal of Transportation Engineering, 132(8), 649–653.
Weller, G., Jorna, R. & Gatti, G. (2005). Road user behaviour model. Sixth Framework Programme, RIPCORD-ISEREST Deliverable D8.
Wooldridge, M.D., Fitzpatrick, K., Koppa, R. & Bauer, K. (2000). Effects of horizontal curvature on
driver visual demand. Transportation Research Record, Journal of the Transportation Research Board,
1737, 71–77.
Yerkes, R.M. & Dodson, J.D. (1908). The relation of strength of stimulus to rapidity of habit-formation.
Journal of Comparative Neurology and Psychology, 18, 459–482.
ABSTRACT: Tourism planning refers to the integrated planning of attractions, services (e.g., accommodation, restaurants, shops, medical facilities, postal services, etc.) and transportation facilities. This paper discusses the application of a Geographic Information System (GIS) as a route planning system for tourists in Thiruvananthapuram City. Tourists who are unfamiliar with a place face the problem of improper planning: they may not be aware of the routes or the proper times to visit the tourist locations, whereas a properly planned trip saves time and money. The optimal route is the route with the least cost, i.e., the least time or the least distance. Given a number of tourist locations and services required to be visited by the traveller, the module developed here computes the best route with minimum time or minimum distance, depending on the impedance chosen. If tourists specify their origin and the tourist spots they wish to visit in a day, the system recommends the best route to travel based on the preferred time of visit or the visiting hours of the locations. A web-based GIS portal was also developed to help tourists get information about the study area in a single portal; it adds capabilities to the existing WebGIS system developed by the tourism department.
1 INTRODUCTION
Tourism planning is associated with the locations and interrelations of tourism infrastructure, which therefore have to be analyzed within a spatial context; GIS can thus be regarded as the perfect platform for tourism planning. GIS is a rapidly expanding field that enables the development of tourism applications in combination with other media. GIS operates mainly on two elements: spatial data and attribute data. Spatial data refer to the geographic space occupied by a feature, and attribute data refer to the associated information, such as name, category, etc. In this context, spatial data refer to the locations of the different tourist spots and of the facilities such as shops, hotels, movie theatres and bus stops around them, while attribute data include the road name, road type and travel time through each segment. Suggesting the optimal route, considering the time windows of the preferred locations, is of great help to tourists who are unfamiliar with the city. Route planning, an important stage of transportation planning, can be applied here.
WebGIS refers to Geographic Information Systems that use web technologies as the method of communication between the elements of a GIS. WebGIS provides a perfect tool to access, disseminate and visualize tourism data: any information that can be displayed on a digital map can be visualized using WebGIS. The implementation of internet-based GIS provides interactive mapping and spatial analysis capabilities that enhance public participation in the decision-making process. WebGIS has the advantages of global accessibility, better cross-platform capability and diverse applications (Sharma, 2016). In addition, the capabilities of internet-based GIS make it possible to answer spatial queries using intelligent maps with integrated images, text, tables and diagrams, showing the locations of hotels, tourist sites, points of interest and so forth. The transition from desktop GIS to WebGIS helps people by allowing them to search freely and easily for information that meets their needs (Sandinska, 2016).
Gill & Bharath (2013) explored the application of GIS-based network analysis for route optimization of tourist places in Delhi. Their study determined the optimal route from the tourist origin to the destination, including the visiting time at each tourist destination. Route analysis
2 STUDY AREA
The study was conducted at Thiruvananthapuram, the capital of Kerala, the southernmost state in India. The map of the study area is shown in Figure 1. The study area is around 226 km². With a road density of 18 km per sq. km, nearly seven per cent of the urban land is under transport use. The National Highway NH 66 passes through the city. The Main Central Road (MC Road), an arterial State Highway in Kerala designated as SH 1, starts from NH 66 at Kesavadasapuram in the city.
Trivandrum, with its pristine sandy beaches, long stretches of palm-fringed shorelines, breezy backwaters, historic monuments and rich cultural heritage, embraces tourists of all kinds. Thiruvananthapuram is also a holy abode, with many temples known for their excellent architecture, and hence a thriving pilgrim destination too. The major tourist attractions within the city, such as the Aakulam and Veli tourist village, the Napier Museum, the Priyadarsini Planetarium, Kuthiramalika, Kanakakkunnu Palace and the Sree Padmanabha Temple, are also shown in Figure 1.
3 METHODOLOGY
The step-by-step methodology adopted in this study is discussed briefly below. The analysis was carried out using the Network Analyst extension of ArcGIS 10.1.
information system for Bhopal City, India. The clients access the portal through an internet browser via HTTP requests. The web server is the software that responds to requests from the client via the HTTP protocol. The web server, or map server, is a type of application server that manages, processes and visualizes spatial data; the main purpose of a map server is the acquisition of spatial data from a spatial database (Tyagi, 2014).
GeoServer was used as the GIS server and Apache Tomcat acts as the web server. GeoServer is a Java-based software server that allows users to view and edit geospatial data and to publish spatial information to the world. Implementing the Web Map Service (WMS) standard, GeoServer can create maps in a variety of output formats. OpenLayers, a free mapping library, is integrated into GeoServer, making map generation quick and easy. Apache Tomcat is an open source implementation of the Java Servlet and JavaServer Pages technologies. PostgreSQL is a powerful, object-relational database management system (ORDBMS), and PostGIS turns it into a spatial database by adding support for three features: spatial types, indexes and functions. As it is built on PostgreSQL, PostGIS automatically inherits its important features as well as open standards for implementation. pgRouting, a further development of pgDijkstra, is an extension that adds routing functionality to PostGIS/PostgreSQL. The Dijkstra algorithm was implemented in the route-finding module of the WebGIS portal, and Java was the scripting language used on both the server and client sides.
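A minimal sketch of Dijkstra's algorithm, the shortest-path method underlying the route-finding module, is given below; the junction names and travel times are hypothetical.

import heapq

# Minimal sketch of Dijkstra's algorithm over a road network where nodes are
# junctions and edge weights are travel times in minutes (hypothetical data).
def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, already improved
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

graph = {
    "museum": [("junction1", 4.0), ("junction2", 9.0)],
    "junction1": [("zoo", 6.0), ("junction2", 3.0)],
    "junction2": [("zoo", 2.0)],
}
print(dijkstra(graph, "museum"))          # e.g. {'museum': 0.0, 'junction1': 4.0, ...}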
The main analyses carried out in this study are optimum route analysis and route planning. This section focuses on finding the optimal route between tourist destinations, i.e., the route with the least cost: the least distance or the least travel time. Given a number of tourist locations and facilities required to be visited by the traveller, this module computed the best route with minimum time or minimum distance. Solving a route analysis can mean finding the quickest or the shortest route, depending on the impedance chosen: if the impedance is time, then the best route is the quickest route. Hence, the best route can be defined as the route that has the lowest impedance, or least cost, where the impedance is chosen by the user; any cost attribute can be used as the impedance when determining the best route. The optimal route between the selected locations is shown in Figure 3, and the direction map is displayed in Figure 4.
The main component of this system is the route planning module, which helps tourists find the best sequence of visits to the desired tourist spots based on the visiting times of the destinations. This helps tourists optimise their trips and plan their day with minimum waste of time, which is of great help to those who are new to the place. Using the identify tool, it is possible to obtain the image and other attributes of the desired locations; the visiting times of the places, along with their type, are displayed, as are the contact numbers and the holidays of each tourist place. If tourists specify the starting location of their journey and the final destination from where they have to board, the sequence of visits is obtained based on the visiting times of the selected tourist locations. The user has the option of preserving the first and last stops and specifying the starting time of the journey, as shown in Figure 5. The Route solver in Network Analyst generates the optimal
sequence of visiting the stop locations, as shown in Figure 6. The best itinerary, based on the visiting times of the tourist locations chosen by the user, is provided.
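The stop-sequencing idea can be illustrated with the brute-force sketch below, which keeps the first and last stops fixed and searches the visiting orders for the least total travel time. The stop names and travel times are hypothetical, and a production solver such as Network Analyst would use a more scalable method than exhaustive search.

from itertools import permutations

# Symmetric travel times in minutes between stops (hypothetical data).
travel = {
    ("origin", "museum"): 15, ("origin", "palace"): 25, ("origin", "beach"): 35,
    ("museum", "palace"): 10, ("museum", "beach"): 30, ("palace", "beach"): 20,
}

def t(a, b):
    # Look up a travel time regardless of direction.
    return travel.get((a, b)) or travel.get((b, a))

middle = ["museum", "palace"]             # stops whose order may be rearranged
best = min(
    (["origin", *order, "beach"] for order in permutations(middle)),
    key=lambda route: sum(t(a, b) for a, b in zip(route, route[1:])),
)
print(best)                               # ['origin', 'museum', 'palace', 'beach']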
The home page of the interface is shown in Figure 7. This page welcomes the user into the route planning system. Once the user clicks the enter button, the user moves to the next page, from which both the information system interface and the optimal route can be accessed.
The interface provides basic information about the tourist spots. On clicking a tourist spot, details such as its category, visiting time, contact number and holidays are listed along with a picture of the spot, as shown in Figure 8. Other layers, such as bus stops, hospitals, police stations, railway stations, movie theatres and shopping centres, are also displayed on the map; by checking the required layers, it is possible to display them. On clicking specific points, their names can be viewed, and on clicking the road network, the names of the roads can be seen. A legend is provided below the map. The zoom level can be adjusted as per the needs of the user, and it is also possible to pan the map to focus on the required area.
5 CONCLUSION
The application of GIS to tourist route planning is discussed in this study. The optimal route option displays the shortest or quickest route based on the impedance selected, and route planning allows tourists to plan their trips in an orderly manner without wasting time. A WebGIS-based portal was developed to help tourists get information about the locations easily. The portal was developed with only basic routing facilities; this can be extended by including the other route planning functionalities mentioned in this paper. The portal has been hosted on a local server and can be published on a public server accessible to all. The bus was the transport mode chosen for the study; bus schedules can be included at the implementation stage, and the study can be further extended to incorporate other modes of transport, with an option for choosing the mode of transport.
ACKNOWLEDGEMENT
The study was funded by Transportation Engineering Research Center (TRC). The authors
would like to acknowledge National Transportation Planning and Research Centre (NAT-
PAC) and Information Kerala Mission (IKM). The authors would also thank Miraglo Pvt.
Ltd., for the technical help extended for developing the web portal.
REFERENCES
Arora, A. & Pandey, M. K. 2011. Transportation Network Model and Network Analysis of Road Net-
works, 12th ESRI India User Conference, December 7–8, Noida, India, 1–9.
Gill, N. & Bharath, B. D. 2013, Identification of Optimum path for Tourist Places using GIS based
Network Analysis: A Case Study of New Delhi, International Journal of Advancement in Remote
Sensing, GIS and Geography 1(2): 34–38.
Kawano, H. & Kokai, M. 2009. Multi-Agents Scheduling and Routing Problem with Time Windows
and Visiting Activities, The Eighth International Symposium on Operations Research and Its Applica-
tions, Sept. 20–22, 442–447.
Kumar, A. & Diwakar, P. S. 2015. Web GIS based Land information System for Bhopal City using
open Source Software and Libraries, International Journal of Science, Engineering and Technology
Research 4(1): 154–160.
Nair, S. 2011. Web Enabled Open Source GIS Based Tourist Information System for Bhopal City, Inter-
national Journal of Engineering Science and Technology 3: 1457–1466.
Sandinska, Y. 2016. Technological Principles and Mapping Applications of Web GIS, Proceedings, 6th
International Conference on Cartography and GIS, Albena, Bulgaria, 13–17.
Sharma N. 2016. Development of Web-Based Geographic Information System (GIS) for Promoting Tour-
ism in Sivasagar District, International Journal of Innovation and Scientific Research 24(1): 144–160.
Tyagi, N. 2014. Web GIS application for customized tourist information system for Eastern U. P., India,
Indian Society of Geomatics, 8(1): 1–6.
Varsha P. B., Venkata Reddy K. & Navatha Y. 2016. Development of Webgis Based Application for
Tourism, Esri India Regional User Conference – 2016.
Zerihun, M. E. 2017. Web Based GIS for Tourism Development Using Effective Free and Open Source
Software Case Study: Gondor Town and its Surrounding Area, Ethiopia, Journal of Geographic
Information System, 9: 47–58.
1 INTRODUCTION
Extensive worldwide urbanization has led to the unsustainable growth of cities. The increase in the number of vehicles on urban roads within a short span has produced a need for rapid expansion of transport facilities, which in turn has resulted in inefficient transport systems. The urban population increased by 48% from 1980 to 2000, a drastic change. The urbanization level has almost stabilized in developed countries, while it remains a rapid, on-going process in developing countries like India and the countries of Asia and Africa. Cities need to expand in area to accommodate rural immigrants. Providing the extra space and the infrastructure necessary for the smooth life of the dwellers, without affecting the overall level of service of the road infrastructure, is a challenging task for transport system authorities and policy makers.
Stead and Marshall (2001) stated that urban planning has an important role to play in
helping to achieve more sustainable travel patterns. They pointed out that transport supply
and parking, as well as the distribution of land use, is influenced by planning policies hav-
ing an impact on travel demand and commuters’ mode. Frank and Pivo (1994) evaluated
the relationships between employment density, population density, land-use mix and single-
occupant vehicle (SOV) usage and found that they were consistently negative for both work
and shopping trips. They also found that the relationships between employment density,
population density, land-use mix, and transit and walking were consistently positive for both
work trips and shopping trips.
Advani et al. (2015) studied the relationship between urban population characteristics and
travel characteristics of different Indian cities with varying sizes and population densities.
Their findings reveal that city size plays a vital role in trip rate as well as trip length. They also
found that an increase in city size resulted in an increase in motorized trips. Crane (2000),
after a thorough investigation, expressed the importance of the geographic scale of the city
in favoring motorized trips, and the relevance of location of residences inside the urban area.
2 BACKGROUND OF STUDY
This paper focuses on the following aspects of medium sized cities in Kerala:
1. The urban form aspects like land area and population density.
2. Travel characteristics like trip length, trip rate and mode share.
3. Demographic aspects like income, household size and vehicle ownership.
The secondary data used for the study were taken from the Comprehensive Mobility Plan
study reports prepared for Kochi, Trivandrum and Calicut.
4 ANALYSIS
4.3 Population density, household size, trip length and trip rate
It was found that the relationship between population density and average trip length, shown in Figure 4, is inverse: as the density decreases, the trip length increases. This is due to the wide distribution of households throughout the city area.
Table 1. Population density, average trip length and average trip rate for the
selected study areas.
Figure 4. Population density and average trip length. Figure 5. Population density and average trip rate.
Similarly, when comparing density and trip rate, as shown in Figure 5, the relationship is direct: as density increases, the trip rate increases. This is because households, shopping areas, offices and other institutions are closely spaced, and the easy access to these places leads people to make frequent trips.
Upon comparing average household size and average trip rate, as shown in Figure 6, it was
found that the average trip rate increases with an increase in average household size.
Table: Mode share (%) by city. Kochi: 42, 36, 7, 15. Trivandrum: 43, 46, 4, 7. Calicut: 41, 34, 7, 18.
lifestyle than the city area. Even though there is a reasonable mode share for public transport,
the private vehicle share constitutes nearly half of the total mode share in Trivandrum city.
5 CONCLUSION
Based on the above analysis, it was found that urban form, demography and travel charac-
teristics are highly interrelated in medium sized cities in developing countries. Understand-
ing the extent of the relationship among these parameters enables the transport planner to
forecast solutions for urban travel problems.
The following are the major conclusions derived from the study:
1. The area of a city and its population density have an inverse relationship.
2. As the city area increases, the average trip length and average trip rate increase.
3. It was found that the average trip length increases with a decrease in population density
and the average trip rate increases with an increase in population density.
4. The average trip rate increases with an increase in average household size.
5. Vehicle ownership increases with an increase in average income.
There is a need for change in urban lifestyles so as to maintain a balance between the
use of motorized modes and non-motorized modes such as walking, cycling and so on. It
was also noted that the share of private vehicles like two-wheelers and cars is increasing.
Providing good quality public transport networks may encourage users to shift to public
transport.
The study is limited to three medium sized cities in Kerala. Therefore, the relationship
between the considered parameters may vary depending on the location of the city, its economic
conditions and other local factors.
REFERENCES
Advani, M., Gupta, N.J., Parida, P. & Durai, B.K. (2015). Inter-relationship between transport system,
safety and city sizes distribution. Institute of Town Planners India Journal, 12(4), 51–62.
Crane, R. (2000). The influence of urban form on travel: An interpretive review. Journal of Planning
Literature, 15(1), 1–23.
Datta, P. (2006). Urbanization in India, regional and sub-regional dynamics, population process in
urban areas. European Population Conference 2006, The University of Liverpool, UK, 21–24 June.
Frank, L.D. & Pivo, G. (1994). Impacts of mixed use and density on utilization of three modes of
travel: Single occupant vehicle, transit and walking. Transportation Research Record, 1466, 44–52.
Stead, D. & Marshall, S. (2001). The relationships between urban form and travel patterns: An international
review and evaluation. European Journal of Transport and Infrastructure Research (EJTIR),
1(2), 113–141.
Report on Comprehensive Mobility Plan for Thiruvananthapuram City. (2016). National Transporta-
tion Planning and Research Centre. December 2016.
Report on Comprehensive Mobility Plan for Kozhikkode City. (2016). National Transportation Planning
and Research Centre. December 2016.
Report on Comprehensive Mobility Plan and Parking Master Plan for Greater Cochin Region. (2015).
Urban Mass Transit Company Limited & Kochi Metro Rail Limited, Interim Report. November 2015.
1 INTRODUCTION
Cities in the developing world are in search of sustainable solutions to their accessibility
and mobility issues. The process is complicated by the rapid pace of urbanization, which is
characterized by motorization, the co-existence of motorized and non-motorized modes,
deteriorating public transport services and institutions, along with deteriorating air quality.
Travel demand has grown faster than the population and the expansion of the city, resulting in
delays for movements between the city center/activity centers and the suburbs/satellite towns.
Public transit systems are struggling to compete with private modes in urban areas, and the
shift is noticeable in developing countries as well, with the predominant modes being cars,
two-wheelers and other intermediary modes. The operating agencies often fail to respond to
the demand. The resultant outcomes in most Indian cities have been increasing congestion due
to the growth in private modes, more accidents and rising air pollution
levels. The integration of multimodal transportation systems explores the coordinated use of
two or more modes of transport for speedy, safe, pleasant and comfortable movement of pas-
sengers in urban areas. It provides convenient and economical connections of various modes
to make a complete journey from origin to destination. Generally, it has been characterized
by increased capacity, efficient access, reduced road congestion, shorter journey times and
reduced air pollution. An Integrated Multi Modal Transport System (IMMTS) comprises trips
that involve two or more different modes of transportation. Muley and Prasad (2014) define
the integration of multimodal transit as the way in which parts of the public transport
network are embedded in the total mobility chain. Cheryan and Sinha (2015) identified the
increasing need for travel due to rapid urbanization, leading to growing congestion and the
need to develop the public transit system by providing an integrated transit service.
promote public transport in urban areas. A well-coordinated integration of different modes
brings about greater convenience for commuters, efficiency and cost effectiveness.
3 METHODOLOGY
The core part of the study lies in the formulation of a well-defined methodology that ensures
an effective proposal of suggestions for an efficient integration of multiple modes in the city
of Kochi. The methodology adopted for the study is shown in Figure 2. At first, a detailed
literature survey was conducted to investigate studies on the integration of multimodal sys-
tems in various cities. Then, the study areas were selected which were the two metro stations
at Kaloor and Edappally. Data were collected from the study areas by conducting different
surveys among the commuters using car, bus, two-wheeler and auto rickshaw as modes. The
surveys conducted were a boarding and alighting survey, bus passenger interviews, as well as
a stated preference survey. The collected data were analyzed and suitable recommendations
were framed.
Edappally: 45% of the buses have an occupancy level of full sitting plus full standing; 30% of the buses are full sitting.
Kaloor: 40% of the buses are full sitting plus half standing; 80% of the buses are half sitting.
Figure 3. Access mode from origin to boarding point.
Figure 4. Egress mode from alighting point to destination.
It was found that approximately 31% of the access trips to the bus stops were more than
5 km and 40% of the egress trips from the bus stops were within 500 m. Another key obser-
vation is that more than 30% of the access and egress trips have a trip distance greater than
1 km.
Waiting Time at the Bus Stop: Figure 6 represents the distribution of commuters according
to the waiting time at the bus stop. It was found that about 46% of the boarding passengers
wait up to five minutes to board the bus followed by 42% of commuters having a waiting time
of 7.5 minutes to 15 minutes.
SP Survey: The major observations from the survey are as follows:
Mode and Trip Purpose: Figure 7 gives the distribution of trip purpose among commuters
considering different types of mode. The majority of the commuters travel with a trip pur-
pose of work followed by business and education.
Mode and Travel Time: Figure 8 gives the distribution of travel time among respondents
considering different types of mode. It was found that the majority of the commuters have a
travel time of 20–40 minutes. It was also found that the majority of the commuters traveling
by bus have a travel time greater than 40 minutes.
Mode and Travel Cost: Figure 9 gives the distribution of travel cost among respondents
considering different types of mode. It was found that the majority of trips involving a car
have travel costs greater than Rs. 40, and trips using autorickshaws have travel costs less than
Rs. 20.
Mode and Travel Distance: Figure 10 represents the distribution of trip length by different
modes of transport. It was found that the trips having a travel distance greater than 20 km
are mostly performed by bus. It was also found that the commuters having a travel distance
less than 20 km prefer cars and two-wheelers as their modes.
Monthly Income and Mode of Travel: The main purpose of comparing monthly income
with mode of travel is to observe the category of commuters with different income levels
using different types of mode. It was found that the higher income commuter groups mainly
prefer cars and two-wheelers. Figure 11 represents a comparison of different modes of travel
with income. It was found that the higher income groups did not prefer public transport,
whereas low income groups prefer public transport for their travel.
Daily Travel Cost and Monthly Income: Figure 12 represents the comparison of monthly
income with travel cost.
Stated Preference: An SP survey was also conducted by giving five different scenarios
to the passengers, and their opinions were sought regarding their willingness to shift to a
public transport system varying in terms of speed, travel time, as well as comfort and con-
venience. The details of each scenario with its characteristics are shown in Table 3. It was
found that about 49% of the users were satisfied with the existing scenario. About 27% of
the commuters were willing to shift to public transport if travel time and waiting time are
reduced by 25%.
Figure 9. Distribution of travel costs by mode. Figure 10. Comparison of mode with travel
distance.
Table 3. Details of each scenario and its characteristics.

Parameter            Scenario 1  Scenario 2        Scenario 3                   Scenario 4        Scenario 5
Waiting time (min)   Existing    Existing          Reduced by 25%               Reduced by 25%    Reduced by 50%
Travel time (min)    Existing    Existing          Same as existing             Reduced by 25%    Reduced by 25%
Travel cost (Rs.)    Existing    Existing          Increased by 25%             Increased by 25%  Increased by 50%
Transfers            Existing    Yes               Yes                          No                No
Comfort level        Existing    Crowded + Non-AC  Sitting + Standing + Non-AC  Sitting + Non-AC  Sitting + AC
Users opting (%)     49          11                10                           27                3
The major integration strategies adopted for the study are the following:
a. Physical Integration
1. Walkability: Walking was found to be the predominant mode from alighting point to
destination for bus users. Pedestrian facilities need to be enhanced, and the key factors
making walking appealing are safety, activity and comfort. For comfortable and safe
walking, footpaths with a minimum width of 1.8 m should be provided.
2. Cyclability: Cycling is an emission free, healthy and affordable transport option that is
highly efficient and consumes little space and few resources. It combines the conven-
ience of door to door travel, the route and schedule flexibility of walking and the range
and speed of many local transit services.
3. Connectivity: Short and direct pedestrian and cycling routes require highly connected
networks of paths and streets around small, permeable blocks. A tight network of
paths and streets offering multiple routes to many destinations can also make walking
and cycling trips varied and enjoyable. Frequent street corners and narrower rights of
way with slow vehicular speed and many pedestrians encourage street activity and local
commerce.
4. Multimodal Integration: Integrating mass transit with the city bus system, and extending
it to non-motorized modes such as bicycles as well as the intermediate para-transit
system, ensures not only first and last mile connectivity but also the seamless transfer
of network users. Multimodal integration exists at various levels: institutional, physical,
fare and operational. The level at which multiple modes need to be integrated depends
on the local city authorities.
5. Information Integration: This is the information system placed on board the bus. It is
a real-time information display unit which helps in providing passengers with the neces-
sary information related to their commute. The information displayed varies from the
next/current stop information, current location, expected time to reach the destination
or the nearest metro station and so on. Route data may be presented as a linear map,
highlighting the current position of the bus and the next stop that it is approaching.
These maps are held as image files on the vehicle computer and displayed when the bus
reaches a pre-specified geolocation. Likewise, audio files can be delivered in exactly the
same way, notifying passengers of a particular stop as the vehicle approaches.
b. Fare Integration
In the current scenario, it is imperative that the fare collection system for the city bus trans-
port, feeder and metro be integrated for the ease of the passengers. The data from ticket
issuing machines should be downloaded into the central room server for further analysis.
These data, which include denomination-wise ticket sales, the number of passengers, the load
factor at each stage of the route and so on, are stored in the computer for further analysis. Based
on these data, the operations of the bus system can be monitored and documented for
future planning of operations. These data will be of great help for route restructuring, route
analysis, introduction/curtailment of routes and so on. Two important components of fare
integration are:
1. Hand held ticketing devices are small ticket vending units which are capable of producing
tickets as per the specified distance or stoppages.
2. Smart cards are small plastic cards which hold a certain value of money and can be used
as a fare paying system at metro stations.
5 CONCLUSIONS
The existing transport system of Kochi city was analyzed as part of the study. At Edappally
and Kaloor, the majority of the public transport buses had an occupancy level exceeding their
seated capacity.
REFERENCES
Arentze, T.A. & E.J.E. Molin (2013), Travelers’ preference in multimodal networks: Design and results
of a comprehensive series of choice experiments, Transportation Research Part A 58 (2013), 15–28.
Cheryan, C.A. & S. Sinha (2015). Assessment of transit transfer experience: Case of Bangalore, 8th
Urban Mobility India Conference & Expo.
Li, L., J. Xiong, A. Chen, S. Zhao, & Z. Dong (2014). Key strategies for improving public transportation
based on planned behaviour theory: Case study in Shanghai, China, Journal of Urban Planning and
Development 141(2): 04014019.
Muley, B.R. & C.S.R.K. Prasad (2014). Integration of public transportation systems. Urban Mobility
India Conference & Expo, Transportation Division NIT Warangal.
Najeeb, P.M.M. (2008). Study of cognitive behavior therapy for drivers improvement, Australasian
Road Safety Research, Policing and Education Conference, Adelaide, South Australia.
Patni, S. & Ghuge, V.V. (2015). Towards achieving multimodal integration of transportation systems for
seamless movement of passengers: Case study of Hyderabad City. Urban Mobility India Conference
& Expo.
ABSTRACT: Bus terminals are enclosures where the interactions among passengers are
very high. Activities like walking, waiting, route choice, boarding and alighting of buses
occur. Pedestrian level of service is an overall measure of walking conditions on a path, facil-
ity or route. In this study, the quantitative evaluation of a terminal is conducted using
time-space analysis. The selected site for the study is the Aluva Kerala State Road Transport
Corporation (KSRTC) bus terminal. The main aim of the study is to determine the level of
service of the terminal. Queuing and walking are considered the major passenger activities
inside a terminal.
Different activities require a different amount of space and time. The time and space utilized
by passengers for standing and circulation is considered.
1 INTRODUCTION
2 STUDY AREA
Aluva, formerly called Alwaye, is the second biggest town in the Greater Cochin City region
in the Ernakulam district of Kerala. It is considered the industrial and commercial hub of
Kochi and an industrial epicenter of the state. Aluva is located on the banks of the River
Periyar. It is also a major transportation hub, with easy access to all major forms of
transportation, and acts as a corridor linking the highland districts to the rest of the
state. Aluva is well known for its accessibility by rail, air, metro and bus.
Aluva KSRTC bus terminal is one of the major bus depots in Kerala. Buses from all other
parts of Kerala have services to this station. Bus services from places like Mysore, Mangalore,
Bangalore, Trichy, Coimbatore, Salem, Palani, Kodaikanal and so on are also available. At
present, the depot has 107 buses of its own and seven interstate services.
After a pilot survey of different bus stations, it was noticed that the Aluva KSRTC bus
terminal has fewer facilities than the other stations nearby. A study of this bus terminal
was therefore timely, in order to determine the pedestrian activities within it and thereby
to determine the LOS. The Aluva KSRTC bus terminal acts as a center connecting different
parts of the state. Because of this, many categories of passengers use the bus service from
this station for different purposes, such as work trips, business trips, shopping and
recreational trips. The major share of the passengers comprises students and private sector
employees.
Figure 1 shows the layout of Aluva KSRTC bus terminal. The entry and exit points for
buses and passengers are shown and also the different zones selected for study are depicted
in the layout.
3 METHODOLOGY
i. Calculate the total available time-space (TS):
TS = A × AP (1)
where TS = total available time-space, in m2.min; A = visible platform area, in m2; and
AP = analysis period, in min.
ii. Calculate the holding area time-space requirements (TSh):
TSh = SP × AST × AASP (2)
where TSh = holding area time-space requirements, in m2.sec; SP = number of standing pedes-
trians; AST = average standing time, in sec; and AASP = standing pedestrian module, in m2/ped.
iii. Calculate the net circulation area time-space (TSc) and the total circulation time (TC):
TSc = TS − TSh (3)
TC = CP × ACT (4)
where CP = number of circulating pedestrians; and ACT = average circulation time, in sec.
The pedestrian module is then
MOD = TSc/TC (5)
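To make the calculation chain concrete, the following minimal Python sketch strings Equations (1)–(5) together and applies the LOS threshold reported later in this paper (MOD of 0.2 m2/ped or less gives grade F). It is our illustration, not the authors' SQL implementation, and the input values are placeholders rather than survey data.

# Minimal sketch of Equations (1)-(5); all input values are illustrative.
def pedestrian_module(area_m2, period_min,
                      standing_peds, avg_stand_time_s, stand_module_m2,
                      walking_peds, avg_circ_time_s):
    ts = area_m2 * period_min * 60.0                           # Eq. (1), converted to m2.s
    ts_h = standing_peds * avg_stand_time_s * stand_module_m2  # Eq. (2)
    ts_c = ts - ts_h                                           # Eq. (3)
    tc = walking_peds * avg_circ_time_s                        # Eq. (4), ped.s
    return ts_c / tc                                           # Eq. (5), m2/ped

def los_grade(mod):
    # Only the grade-F threshold is reported in the text (MOD <= 0.2 m2/ped);
    # the remaining class boundaries would come from Table 1.
    return "F" if mod <= 0.2 else "A-E (see Table 1)"

mod = pedestrian_module(area_m2=150, period_min=15, standing_peds=228,
                        avg_stand_time_s=120, stand_module_m2=0.75,
                        walking_peds=220, avg_circ_time_s=60)
print(round(mod, 2), los_grade(mod))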
4 DATA USED
The study site was Aluva KSRTC bus terminal. The quantitative analysis of the terminal
would help us to identify the real conditions of the terminal. After conducting a pilot survey,
the different zones inside the terminal were identified. From the survey, five different zones
where major pedestrian activities occur were selected. The zones are the waiting area, area
near the bus bay, hotel or restaurant, mini snacks bar area and shopping area. Areas of all
the zones were measured manually. A pedestrian volume survey was conducted to obtain the
number of passengers occupying each zone and the average time spent by passengers on
each activity. Standing and circulating activities of passengers are included in this study.

Table 2. Number of passengers present in each zone during the survey.

Zone                  Activity                Pedestrian counts (six peak periods)
Area near bus bay     Standing pedestrians    192  228  296  272  116  200   12.25
                      Walking pedestrians     224  164  272  312  168  188
Waiting area          Standing pedestrians    228  256  372  300  240  184   10.8
                      Walking pedestrians     220  180  180  272  140  176
Shopping area         Standing pedestrians    108  164  104  228   44   80    8.2
                      Walking pedestrians      76   80   64  140   88   88
Hotel or restaurant   Standing pedestrians     32   40   60   76   48   76   15.58
                      Walking pedestrians      24   24    8   60   60   72
Mini snacks bar area  Standing pedestrians     40   56   80  108   64   80    6.8
                      Walking pedestrians      68   60   68  100   96   92
The survey was conducted manually for three different days, including a typical working
day, a weekend and a holiday. The survey was conducted during the morning peak hours and
evening peak hours in a time interval of 15 minutes. Table 2 shows the number of passengers
present in each zone during the time of the survey. The morning peak time was identified
from a pilot study from 8.00 am to 10.00 am and evening peak time was from 3.30 pm to
6.00 pm. The data in the table depict the number of passengers in each hour of the survey. The data collected
include the area of all the zones, number of standing pedestrians, number of circulating
pedestrians, average standing time of pedestrian, average circulation time of pedestrians and
standing pedestrian module. After the data collection, the calculations were done using the
equations given in section 3.2.
The calculation steps were automated using Oracle SQL Developer, which uses the Structured
Query Language (SQL). SQL is a set of statements with which programs and users access data
in an Oracle database, and it benefits managers, end users, application programmers and
database administrators. Technically speaking, SQL is a data sublanguage: it works with sets
of data rather than individual units. For data input, a survey table containing all the data
collected from the survey is created. A query encoding the calculation equations then reads
the data from the table and applies the calculation steps. Finally, the computed pedestrian
module is compared with Table 1 and the LOS is determined.
The MOD calculated for each zone is 0.2 m2/ped or less, which indicates an LOS grade of 'F'.
This indicates that the bus terminal performs poorly, with congested conditions: there is
little interpersonal space, and a standing-only condition exists at peak hours.
• Walking and queuing were the major activities occurring in the terminal.
• The evaluation was done using time-space analysis.
• The time and space utilized by passengers were considered.
• Five zones of the terminal were identified where the major passenger activities occur.
• The area of all the zones, number of standing pedestrians, number of circulating pedes-
trians, average standing time of pedestrians, average circulation time of pedestrians and
standing pedestrian module were determined.
• In a quantitative aspect or by means of time-space analysis, the LOS was found as LOS
grade F, which indicates a congested and a standing only possible condition.
• The results indicate low quality of terminal and lack of space availability during peak
hours.
The data collection was done manually due to the complex structure of the bus terminal and
some administrative constraints. A videographic survey could instead be used to obtain more
precise values.
The work could be extended to a greater number of terminals.
The LOS values of different terminals could then be compared.
Only a quantitative evaluation was carried out in this study. Quantitative as well as qualita-
tive analysis could be incorporated in the same study.
Classification on the basis of long distance travelers, short distance travelers and so on
could be done.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the effort of classmates of the final semester trans-
portation engineering programme at RIT, Kottayam, who played a great part in the data
collection.
REFERENCES
Chang, C.C. (2009). A model for the evaluation of airport service quality. Proceedings of the Institution
of Civil Engineers transport 162, 4, 207–213.
Correia, A.R. & Wirasinghe, S.C. (2008). Analysis of level of service at airport departure lounges: user
perception approach. Journal of Transportation Engineering, 134, 105–109.
Demetsky, M.J., Hoel, L.A. & Virkler, R.M. (1977). A procedural guide for the design of transit stations
and terminals. Department of Civil Engineering, University of Virginia, Charlottesville, Virginia.
Network Rail. (2011). Station capacity assessment guidance.
Eboli, L. & Mazzulla, G. (2007). Service quality attributes affecting customer satisfaction for bus tran-
sit. Journal of Public Transportation, 10 (3), 21−34.
Geeva, G. & Anjaneyulu, M.V.L.R. (2015). Development of quality of service index for bus terminals,
Proc. 2nd Conference on Transportation Systems Engineering and Management. NIT Tiruchirappalli,
India, May 1–2.
Grigoriadou, M. & Braaksma, J.P. (1986). Application of the time-space concept in analyzing metro
station platforms. Journal of Institute of Transportation Engineers, 33–37.
Litman, T. (2008). Valuing transit service quality improvements. Journal of Public Transportation, 11, 2.
Main Roads Western Australia. (2006). Guidelines for assessing pedestrian level of service. 1–9.
Transit Translink Authority. (2012). Public Transport Infrastructure Manual.
Transportation Research Board. (2000). Highway capacity manual. National Research Council, Washington, D.C.
H. Ramesh
Department of Applied Mechanics, National Institute of Technology, Surathkal, Karnataka, India
ABSTRACT: Groundwater flow and solute transport models, MODFLOW and MT3DMS,
were established to determine the spread of contamination from a landfill maintained by
Mangaluru City Corporation at Vamanjoor, located nearly 8.5 km from the center of the city.
As Vamanjoor is home to many educational institutes and is also a residential area,
the spread of the contamination has to be analyzed. For this study, the aquifer considered is
a subbasin of the Gurupur basin. This study has focused on handling the data available in
the most efficient way to develop a consistent simulation model. The model was calibrated
successfully, with an RMSE of 0.32 m between the observed and simulated heads. The model
was also evaluated by comparing it with the measured water head and chloride level from
the field on a seasonal basis. After successful validation, the model was run to determine
the extent of contamination and also to forecast a scenario for maximum rainfall. The results
show that the contamination has spread to a distance of 1 km from the landfill and with
maximum rainfall the spread will be around 1.8 km from the landfill.
1 INTRODUCTION
Solid waste, when dumped into an uncontrolled landfill, can pose a serious threat to an underlying
aquifer due to the migration of leachate generated from the landfill into the groundwater.
The amount and nature of waste dumped in a landfill depend upon various factors such as
the number of inhabitants of the city, people's lifestyle, food habits, standard of living, the
degree of industrialization and commercialization of the area, the culture and traditions of
the inhabitants, and the climate of the area. Due to the unscientific collection, transportation
and disposal of solid waste without environmentally friendly methods such as composting,
incineration and so on, the dumping of waste in India has become increasingly chaotic. The
leachate characteristics also depend upon the pre-treatment of the solid waste, such as the
separation of recyclable materials like plastics, paper, metals and glass, as well as grinding
or baling of the waste (Kumar & Alappat, 2005).
As time progresses, organic components tend to degrade and become stable, whereas conserva-
tive elements such as various heavy metals, chloride, sulfide and so on, will remain long after
waste stabilization occurs. Metals are found in high concentration in leachate as they are usually
precipitated within the landfill. The disposal of waste generated from domestic and industrial
areas makes landfill sites a necessary component of a metropolitan life cycle. However, low-lying
disposal sites lacking proper leachate collection systems and landfill gas monitoring and
collection tools are a potential threat to underlying groundwater resources.
Landfill sites are complex environments characterized by many interacting physical, chem-
ical and biological processes. Leachate from landfills exhibits a major potential environmen-
tal impact for groundwater and surface water pollution and represents a potential health
risk to both surrounding ecosystems and human populations. Mangaluru generates around
250 tonnes of solid waste every day, of which 200 tonnes is collected and disposed of in the landfill
located at Vamanjoor. Vamanjoor, which is located 15 km from the city, is along a national
highway (NH13) and is home to many educational institutes. The dumping yard has an area
of 28.32 hectares and is poorly managed. In the vicinity, the groundwater is getting polluted
because of contamination by leachate.
The groundwater flow model is simulated with the help of Visual MODFLOW, and the
movement of the contaminant subjected to a variety of boundary conditions has been simu-
lated by using MODFLOW, MT3DMS and MODPATH, which are widely used (Rejani et al.,
2008; An et al., 2013). All the complex processes, such as advection, dispersion and chemical
reaction, are well addressed with the software.
2 STUDY AREA
Vamanjoor is located in the Gurupur basin which covers an area of 841 km2 (Figure 1). The
river Gurupur is a major river flowing in a westerly direction in the Dakshina Kannada district
in Karnataka State. The basin covers the foothills of the Western Ghats; in the middle portion
lie the lateritic plateaus, with a flat coastal alluvium at its mouth. The area lies between 12°50' to
13°10’ north latitude and 74°0’ to 75°5’ east longitude. The study area is tropical, with a humid
type of climate and gets an annual average precipitation of 3,810 mm. From the previous stud-
ies conducted in the area, the aquifer is categorized as unconfined with rich lateritic formation
and having good groundwater potential (Harshendra, 1991). The transmissivity of the aquifer
and its specific yield were determined to be in the range of 10 to 213 m2/day, and 7.85%, respectively.
3 METHODOLOGY
chloride of the subbasin. The model domain is enclosed by 1,622 active cells covering the
total area, with cell dimensions of 100 m × 100 m. The domain extends 30 m in the vertical
direction, representing a mono-layered unconfined aquifer. In order to make a digital
elevation model (DEM), the contour lines of the toposheets numbered 48/L/13/NE and
48/L/13/NW (scale 1:25000) were considered. The spatial discretization details of the model are
given by the X or longitudinal direction 485640 E and the Y or latitudinal direction 1427956 N
(with respect to the origin of UTM WGS 1984 Zone 43), the number of cells in the X and Y
directions being 41 (Figure 2). For modeling purposes, a daily time step was used. The
temporal and spatial discretization were finalized after initial trial runs, based on the
precision of the results.
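As a rough illustration of this discretization, the open-source FloPy package can set up an equivalent grid (an assumption on our part; the study itself was built in Visual MODFLOW). Only the cell size, cell counts and layer thickness below come from the text; the model name, executable and flat elevations are placeholders for the DEM-derived surfaces.

import flopy

# Sketch of the grid described above (illustrative, not the study's model files).
mf = flopy.modflow.Modflow(modelname="vamanjoor", exe_name="mf2005")
dis = flopy.modflow.ModflowDis(
    mf, nlay=1, nrow=41, ncol=41,     # mono-layered grid, 41 x 41 cells
    delr=100.0, delc=100.0,           # 100 m x 100 m cells
    top=30.0, botm=0.0,               # 30 m vertical extent, unconfined aquifer
    nper=1, perlen=365.0, nstp=365,   # daily time steps over a one-year period
)
mf.write_input()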
4 RESULTS
The scatter plot of observed and model-simulated values was plotted with the x and y axes
having the same intervals, and a 1:1 trend line (or 45° line) was fitted diagonally from the
point (0,0) across the plot area (Figure 4). The figure reveals that the model fits the observed
groundwater heads well, as all points lie close to the diagonal line. The RMSE between the
observed and simulated heads was 0.325 m, which also indicates a very good fit, so the model
can be used for further applications. The groundwater head thus obtained was compared with
one third of the observation wells of the subbasin and also with the data from previous
research (Honnanagoudar, 2015). The graph showed convincingly good agreement between
the observed and simulated heads, with an RMSE of 0.625 m. The process of validation
was then carried out by taking the water head obtained from the three observation wells in the
area maintained by the Central Ground Water Board and the Department of Mines and Geol-
ogy, Government of Karnataka. An RMSE value of 1.15 m was obtained after analyzing the
observed and calibrated groundwater head. The results were found to be consistent with that
of the calibrated results and therefore, the model was considered to be reliable for future pre-
diction. After successful calibration and validation, parameters such as a recharge coefficient
of 10%, porosity of 30%, bed conductance of 15 m/day, horizontal conductivity of 7 m/day,
specific yield of 7.85% and transmissivity of 213 m2/day were taken for future application of the model.
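The goodness-of-fit measure used throughout this calibration is the root mean square error between observed and simulated heads; a minimal sketch follows, with placeholder head values rather than data from the study.

import numpy as np

# RMSE between observed and simulated groundwater heads (illustrative values).
observed = np.array([12.4, 10.1, 9.7, 11.3, 8.9])    # heads, m
simulated = np.array([12.1, 10.5, 9.4, 11.6, 9.2])   # heads, m
rmse = np.sqrt(np.mean((observed - simulated) ** 2))
print(f"RMSE = {rmse:.3f} m")  # calibration was accepted at 0.32 m in the study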
The hydrodynamic dispersivity was given initially and adjusted by a trial and error method
during calibration. A horizontal transverse dispersivity of one tenth of the longitudinal
dispersivity was suggested by Cobaner et al. (2012). As per Bhosale and Kumar (2001), for
related coastal aquifer conditions the value of longitudinal dispersivity can be taken as
between 15 and 150 m. Similar to the calibration of the
flow parameters, calibration of the transport parameter was also completed. Only four wells
were located in the contaminated region which were chosen as observation wells. The chloride
level of the landfill leachate was identified as 6,000 mg/l during October 2016. The chloride
level of water in the observation wells was analyzed and compared with the simulated values.
After successful calibration, the transport parameters, namely the longitudinal and transverse
dispersivity, were taken as 25 m and 2.5 m, respectively. The model was validated by comparing
against the values maintained by the Karnataka State Pollution Control Board during 2016.
Since the results were reliable, the model was used for forecasting the future scenario.
Groundwater, which is one of the world’s most important natural resources, is under constant
threat due to various human activities. The purpose of the present study is to understand the
extent of contamination due to a landfill by using the software MODFLOW and MT3DMS.
After successfully calibrating the model, with an RMSE between the observed and simulated
heads of 0.32 m, it was applied to predict various scenarios. The results of
the study show that the spread of the contaminant has reached almost a 1 km radius around
the landfill. Additionally, the model predicted that contamination could reach a distance of
1.8 km when the value of maximum annual rainfall of 4.75 m is included. Since the area of
the current study is home to many educational institutes and also a residential area, urgent
attention must be given to prevent the spread of the contaminants.
REFERENCES
Bhosale, D.D. & Kumar, C.P. Simulation of seawater intrusion in Ernakulam coast. Retrieved from:
http://www.angelfire.com/nh/cpkumar/publication/ernac.pdf. Accessed: 10/10/2017.
Cobaner, M., Yurtal, R., Dogan, A. & Motz, L.H. (2012). Three dimensional simulation of seawater
intrusion in coastal aquifers: A case study in the Goksu Deltaic Plain. Journal of Hydrology, 262–280.
An, D., Jiang, Y., Xi, B., Ma, Z., Yang, Y., Yang, Q., Li, M., Zhang, J., Ba, S. & Jiang, L. (2013).
Analysis for remedial alternatives of unregulated municipal solid waste landfills leachate-contaminated
groundwater. Frontiers of Earth Science, 7(3), 310–319.
Kumar, D. & Alappat, B.J. (2005). Evaluating leachate contamination potential of landfill sites
using leachate pollution index. Clean Technologies and Environmental Policy, 7, 190–197.
Harbaugh A.W., Banta, E.R., Hill, M.C. & Mc Donald, M.G. (2000). MODFLOW – 2000, the US Geo-
logical Survey Modular Groundwater Model—User guide to modularization concepts and ground
water flow process. US Geological Survey, Open File Report, 1–92.
Harshendra, K. (1991). Studies on water quality and soil fertility in relation to crop yield in selected river basins
of D.K. district of Karnataka State (Ph.D. thesis, Mangalore University, Karnataka, India, 146–147).
Honnanagoudar, S.A. (2015). Studies on aquifer characterization and seawater intrusion vulnerability
assessment of coastal Dakshina Kannada district, Karnataka (Ph.D. Thesis, National Institute of
Technology Karnataka, Surathkal India, 184–185).
Papadopoulou, M.P., Karatzas, G.P. & Bougioukou, G.G. (2007). Numerical modeling of the environ-
mental impact of landfill leachate leakage on groundwater quality – a field application. Environment
Modeling and Assessment, 12, 43–54.
Rejani, R., Jha, M.K., Panda, S.N. & Mull, R. (2008). Simulation modeling for efficient groundwater
management in Balasore coastal basin, India. Water Resources Management, 22(1), 23–50.
Zheng, C. (2006). MT3DMS v 5.2. Supplemental user’s guide. Department of Geological Sciences, Uni-
versity of Alabama.
Zheng, C. & Wang, K. (1999). A modular three dimensional multispecies transport model for simulation of
advection, dispersion and chemical reactions of contaminants in groundwater systems. Contract Report
SERD 99–1, U.S. Army Corps of Engineers, United States.
1 INTRODUCTION
At present, environmental management has become more complex (Purandara et al. 2012).
This is mainly due to anthropogenic activities, rapid urbanization, fast growth in agriculture,
environmental mandates, recreational interests, hydropower generation, over-allocation and
changing land use patterns, which result in climate change and a fragmented body of available
information (Purandara et al. 2012; Welsh et al. 2013). Water quality modeling
of freshwaters is a trending research area in the present scenario which focuses on the evalu-
ation of biological and chemical status of the water bodies (Altenburger et al. 2015). Point
source pollution is found to be managed well compared to non-point source (NPS) pollution.
NPS pollutants do not originate from a statutory point source, but are dispersed into the
receiving water by various means. They include components from evapotranspiration,
percolation, interception, absorption and vegetative cover, which can bring about
changes in the hydrologic cycle, water balance, land surface characteristics and surface water
characteristics (Tong & Chen 2002; LeBlanc et al. 1997; Lai et al. 2011).
Water quality models are effective tools for understanding the fate and transport of contami-
nants in a river system (Wang et al. 2013). Many site-specific river basin models were developed
and used by engineers in the early days for decision-making purposes. Numerous general
water quality models have been developed in recent years for understanding various hydrological
processes. Some of them include QUAL2K (Esterby 1996), WASP7, CE-QUAL-ICM (Chuco
2004; Bahadur et al. 2013; McKinnon et al. 2009; Cameira et al. 2007), HEC-RAS, MIKE11,
DUFLOW, AQUASIM, DESERT, the EFDC model (U.S. Environmental Protection Agency
1997, 1999), GSTAR-1D and CASC2D.
Computer models used in integrated water quality modeling are capable of combining
various spatial and environmental data for complex studies. To understand these water qual-
ity models, a basic knowledge of mathematical modeling is required. For example, mod-
els have been developed to know the fate of organic or inorganic contaminants transport
or to know the transport of nutrients, pesticide, sediment loss, erosion for informing land
∂C/∂t = −U ∂C/∂x + E ∂²C/∂x² + Sc (1)
where, t is time, U is the steady-state average velocity of the water in the flow direction (x)
and E is the dispersion coefficient. U and E are assumed to be in the direction of flow (x) of
the water body. Eq. (1) can be applied in a great number of water bodies where the flow can
be approximated as one-dimensional. The first term represents the rate of change of
concentration (C) of a component, the second term represents advection (first order), the
third term represents dispersion (second order), and the fourth term represents the source
term, which incorporates inflows, outflows, and reactions due to physical, chemical and/or
biological processes (Stamou & Rutschmann 2011).
Eq. (1) can be solved numerically using the finite difference method. The source term may
vary according to the type of water body under consideration (Stamou & Rutschmann 2011;
Gelda & Effler 2000).
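As a minimal sketch of such a finite-difference solution (our illustration, not a scheme from the cited studies), the following applies first-order upwind differencing to the advection term and central differencing to the dispersion term of Eq. (1), with periodic boundaries and illustrative parameter values.

import numpy as np

def advect_disperse(C0, U=0.5, E=0.05, dx=1.0, dt=0.5, steps=200, Sc=0.0):
    # Explicit scheme: upwind advection (valid for U > 0), central dispersion.
    # Stability needs dt <= dx/U (Courant) and dt <= dx**2/(2E) (diffusion limit).
    assert dt <= dx / U and dt <= dx**2 / (2 * E), "time step too large"
    C = C0.astype(float).copy()
    for _ in range(steps):
        adv = -U * (C - np.roll(C, 1)) / dx                          # upwind dC/dx
        disp = E * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2  # central d2C/dx2
        C += dt * (adv + disp + Sc)  # np.roll gives periodic boundaries (sketch only)
    return C

# Example: an initial concentration pulse is advected downstream while spreading.
C0 = np.zeros(100)
C0[10:15] = 1.0
print(advect_disperse(C0).round(3))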
the quantity of nutrients that are lost from the top-most soil layers instead of being absorbed
by plant roots.
2.3.1 FARMSIM
The model was developed especially for complex, dynamic systems with many interacting
biophysical components across a wide range of soil, climatic and socioeconomic conditions
(Wijk et al. 2004).
2.3.2 NUANCES-FARMSIM
Nutrient Use in ANimal and Cropping systems: Efficiencies and Scales-FARM SIMula-
tor is an integrated crop—livestock model developed to analyse African smallholder farm
systems.
2.3.3 SWATRE
A transient one-dimensional finite-difference model for vertical unsaturated flow with water
uptake by roots is presented. In the model a number of boundary conditions are given for
both top and bottom of the system. At the top, 24-hr. data on rainfall, potential soil evapora-
tion and potential transpiration are needed. When the soil system remains unsaturated, one
of three bottom boundary conditions can be used: pressure head, zero flux or free drainage.
When the lower part of the system remains saturated, one can either give the groundwater
level or the flux through the bottom of the system as input. In the latter case the groundwater
level is computed (Belmans et al., 1983).
2.3.4 ANIMO
Simulation of the nitrogen behaviour in the soil and the nitrogen uptake by winter wheat was
performed using the model ANIMO (Rijtema & Kroes, 1991). It is a detailed process ori-
ented simulation model for evaluation of nitrate leaching to groundwater, N- and P-loads on
surface waters and Green House Gas emission. The model is primarily used for the ex-ante
evaluation of fertilization policy and legislation at regional and national scales. The output of
the SWATRE model is taken as an input for the ANIMO model.
2.3.5 OVERSEER
The OVERSEER nutrient budget model is a decision support tool (Wheeler et al. 2003) to assist
farmers and consultants in developing nutrient plans (Wheeler et al. 2006). It produces estimates of
long-term average nutrient losses via drainage and runoff at a farm and farm block level. Over-
seer also estimates greenhouse gas emissions and aids in planning fertiliser applications.
2.3.6 AMAIZEN
AmaizeN is maize growth simulation software. The core of the system is a daily-time-step
simulation model of maize growth and development, driven by solar radiation and interacting
with soils. This model is an extension of a maize growth simulation model modified for
cooler conditions.
2.3.7 APSIM
Agricultural Production System Simulator is a software system which allows models of crop
and pasture production, residue decomposition, soil water and nutrient flow, and erosion to
be readily re-configured to simulate various production systems and soil and crop manage-
ment to be dynamically simulated using conditional rules (Mccown et al. 1996). The main
objective behind the development of the model was to simulate biophysical processes in farm-
ing systems, in particular where there is interest in the economic and ecological outcomes of
management practice in the face of climatic risk (Keating et al. 2003). The model simulates
the dynamics of soil-/plant-management interactions within a single crop or a cropping sys-
tem (Wang et al. 2002).
2.3.8 MitAgator
MitAgator is a farm scale GIS (Geographic Information System) based DST (Decision Sup-
port Tool) that has been developed to identify and estimate nitrogen, phosphorus, sediment,
and E. coli loss spatially across a farm landscape. This model is a spatially explicit model that
extends the results produced by Overseer to identify where on the farm property nutrient loss
is occurring (Anastasiadis et al. 2013).
2.3.9 SPASMO
Soil Plant Atmosphere System Model is a physics based model of plant growth and nitro-
gen leaching, with some estimation of phosphorus. It focuses on arable and horticultural
activities. SPASMO produces estimates of crop production, drainage, and nutrient leaching
from soil on a daily time step. Its inputs include spatial data on climate, irrigation, soil prop-
erties, land use, and crop and stock management (Anastasiadis et al. 2013).
3 CONCLUSIONS
A brief description of numerical water quality models has been presented. Farm scale models
and the use of models for understanding nutrient loss to waterways have been discussed. The
practice of modelling is well established among scientists as it provides an effective way to
think about and understand complex phenomena. Models provide structure to guide new
REFERENCES
Argent, R., Sojda, R., Giupponi, C., McIntosh, B., Voinov, A., & Maier, H. (2016). Best practices for
conceptual modelling in environmental planning and management. Environmental Modelling & Soft-
ware, 80, 113–121. http://dx.doi.org/10.1016/j.envsoft.2016.02.023.
Bahadur, R., Ziemniak, C., Amstutz, D., & Samuels, W. (2013). Global River Basin Modeling and
Contaminant Transport. American Journal of Climate Change, 02(02), 138–146. http://dx.doi.
org/10.4236/ajcc.2013.22014.
Belmans, C., Wesseling, J., & Feddes, R. (1983). Simulation model of the water balance of a cropped soil:
SWATRE. Journal of Hydrology, 63(3–4), 271–286. http://dx.doi.org/10.1016/0022-1694(83)90045-8.
Cameira, M., Fernando, R., Ahuja, L. and Ma, L. (2007). Using RZWQM to simulate the fate of nitro-
gen in field soil–crop environment in the Mediterranean region. Agricultural Water Management,
90(1–2), pp.121–136.
Cherry, K., Shepherd, M., Withers, P. and Mooney, S. (2008). Assessing the effectiveness of actions
to mitigate nutrient loss from agriculture: A review of methods. Science of the Total Environment,
406(1–2), pp.1–23.
Cunderlik, J. (2007). River Bank Erosion Assessment using 3D Hydrodynamic and Sediment Transport
Modeling. Journal of Water Management Modeling. http://dx.doi.org/10.14796/jwmm.r227-02.
Esterby, S. (1996). Review of methods for the detection and estimation of trends with emphasis on
water quality applications. Hydrological Processes, 10(2), 127–149. http://dx.doi.org/10.1002/
(sici)1099-1085(199602)10:2<127::aid-hyp354>3.0.co;2–8.
Gelda, R., & Effler, S. (2003). Application of a Probabilistic Ammonia Model: Identification of Impor-
tant Model Inputs and Critique of a TMDL Analysis for an Urban Lake. Lake and Reservoir Man-
agement, 19(3), 187–199. http://dx.doi.org/10.1080/07438140309354084.
Guagnano, D., Rusconi, E., & Umiltà, C. (2013). Joint (Mis-)Representations: A Reply to Welsh et al.
(2013). Journal of Motor Behavior, 45(1), 7–8. http://dx.doi.org/10.1080/00222895.2012.752688.
Gyawali, S., Techato, K., Monprapussorn, S., & Yuangyai, C. (2013). Integrating Land Use and Water
Quality for Environmental based Land Use Planning for U-tapao River Basin, Thailand. Procedia—
Social and Behavioral Sciences, 91, 556–563. http://dx.doi.org/10.1016/j.sbspro.2013.08.454.
Hashemi, F., Olesen, J., Dalgaard, T. and Børgesen, C. (2016). Review of scenario analyses to reduce
agricultural nitrogen and phosphorus loading to the aquatic environment. Science of the Total Envi-
ronment, 573, pp. 608–626.
Holzbecher, E. (2012). Environmental modeling. Heidelberg: Springer.
Keating, B., Carberry, P., Hammer, G., Probert, M., Robertson, M., & Holzworth, D. et al. (2003). An
overview of APSIM, a model designed for farming systems simulation. European Journal of Agro
nomy, 18(3–4), 267–288. http://dx.doi.org/10.1016/s1161-0301(02)00108-9.
McKinnon, A.D., Brinkman, R., Trott, L., Undu, M., Muawanah & Rachmansyah (2009). The fate
of organic matter derived from small-scale fish cage aquaculture in coastal waters of Sulawesi and
Sumatra, Indonesia. Aquaculture, 295(1–2), 60–75.
Mudgal, A., C. Baffaut, S.H. Anderson, E.J. Sadler and A.L. Thompson (2010). APEX Model Assess-
ment of Variable Landscapes on Runoff and Dissolved Herbicides. Transactions of the ASABE,
53(4), pp.1047–1058.
Purandara, B., Varadarajan, N., Venkatesh, B., & Choubey, V. (2011). Surface water quality evalua-
tion and modeling of Ghataprabha River, Karnataka, India. Environmental Monitoring and Assess-
ment, 184(3), 1371–1378. http://dx.doi.org/10.1007/s10661-011-2047-1.
Rabi, A., Hadzima-Nyarko, M., & Šperac, M. (2015). Modelling river temperature from air temper-
ature: case of the River Drava (Croatia). Hydrological Sciences Journal, 60(9), 1490–1507. http://
dx.doi.org/10.1080/02626667.2014.914215.
Rijtema, P., & Kroes, J. (1991). Some results of nitrogen simulations with the model ANIMO. Fertilizer
Research, 27(2–3), 189–198. http://dx.doi.org/10.1007/bf01051127.
Shukla, J., Hallam, T., & Capasso, V. Mathematical modelling of environmental and ecological systems.
A.R. Laiju
National Institute of Technology, Srinagar, Uttarakhand, India
S. Sarkar
Indian Institute of Technology, Roorkee, Uttarakhand, India
ABSTRACT: This study investigated the performance of two anion exchange resins
and also synthesized and evaluated the performance of a hybrid material consisting of a
strong base anion exchange resin dispersed with FeS (iron sulphide) nanoparticles for the
removal of hexavalent chromium, Cr(VI). The synthesis of the hybrid was carried out by
using an in situ process, where the strong base anion exchange resin serves as a nanoreac-
tor and provides a confined medium for synthesis. They stabilize and isolate the synthe-
sized nanoparticles preventing their aggregation. Equilibrium batch studies, adsorption
isotherm and column studies were performed to determine the maximum uptake capacity
for Cr(VI). Comparison of a fixed bed column run between the hybrid material and par-
ent resin confirmed that the Cr(VI) was selectively removed and the hybrid showed higher
capacity. The wide availability of resin and low-cost chemicals for synthesis and regen-
eration will make hybrid material an attractive option for the removal of Cr(VI) from
contaminated water.
1 INTRODUCTION
Chromium is unique among toxic heavy metals in the environment in that its toxicity is
regulated on the basis of its oxidation state and total concentration. In water distribution
systems and in water treatment processes, chromium exists mainly as trivalent chromium,
Cr(III) and hexavalent chromium, Cr(VI) (Kimbrough et al., 1999). Cr(VI), an inorganic
contaminant which received public attention recently, can be considered as a potential
human carcinogen (International Agency for Research on Cancer, 2012). Public concern
and potential adverse health effects prompt the investigation of Cr(VI) in drinking water
supplies below Maximum Contaminant Level (MCL). In India, the maximum permis-
sible limit for Cr(VI) in drinking water is 50 µg/L. Most of the regulations by different
agencies are based on total chromium not just Cr(VI), due to the absence of sufficient
risk analysis for the oral intake of Cr(VI) (Anderson, 1997). To reduce the contamination
of Cr(VI) from water and wastewater, various methods have been used mainly involv-
ing reduction followed by precipitation, adsorption, ion exchange and membrane process
(Sharma et al., 2008).
The objectives of the present study are to evaluate the performance of a strong base anion
exchange resin, Amberlite IRA 400 (IRA 400), and a weak base anion exchange resin, Lewatit
MP 64 (LMP 64), to synthesize a novel Hybrid Ion exchange Material (HIM), and to validate
its performance in removing trace concentrations of Cr(VI) from contaminated water. The
physical properties of the resins are shown in Table 1.
2 EXPERIMENTAL METHODOLOGY
η (%) = ((C0 − Ce)/C0) × 100 (3)

qe = V(C0 − Ce)/m (4)
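A minimal sketch of Equations (3) and (4) for a batch equilibrium test follows. Since the defining text falls on a missing page, the usual convention is assumed: C0 and Ce are the initial and equilibrium Cr(VI) concentrations (mg/L), V is the solution volume (L) and m is the mass of exchanger (g).

# Sketch of Eqs. (3) and (4); symbol meanings are assumed, values illustrative.
def removal_efficiency(c0, ce):
    # Eq. (3): c0, ce = initial and equilibrium Cr(VI) concentrations, mg/L
    return (c0 - ce) / c0 * 100.0

def equilibrium_uptake(c0, ce, volume_l, mass_g):
    # Eq. (4): volume_l = solution volume (L), mass_g = exchanger mass (g)
    return volume_l * (c0 - ce) / mass_g

print(removal_efficiency(10.0, 1.2))             # 88.0 (%)
print(equilibrium_uptake(10.0, 1.2, 0.1, 0.05))  # 17.6 (mg/g)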
analyzed for Cr(VI) concentration by the 1,5 diphenylcarbazide method by using a split beam
UV visible spectrophotometer (T60 UV, PG Instruments, UK) at 540 nm.
3.3 Effect of pH
From Figure 5, maximum uptakes of 11.08, 7.01 and 14.41 mg/g were obtained for
IRA 400, LMP 64 and HIM, respectively, at pH 4. For both the resins and the hybrid, the
uptake, as well as the efficiency, increases as the pH decreases.
From the predominance diagram, H2CrO4 is predominant for a pH less than 1, HCrO4− for
a pH between 1 and 6.5 and divalent CrO42− for a pH above 6.5. Dimerization of HCrO4− ions
(Cr2O72−) is possible if the concentration is higher than 1 g/L (Saleh et al., 1989). The increased
Cr(VI) removal efficiency at an acidic pH is mainly due to the fact that HCrO4−, being mono-
valent, can attach to a single ion exchange functional group, whereas CrO42−, being divalent,
needs to bind to two ion exchange functional groups (Sengupta & Clifford, 1986). At an
alkaline pH, sorption decreases due to competition between CrO42− and OH− for the binding
sites on the exchanger, which results in lower uptake. The quaternary ammonium functional
group of IRA 400 has a greater effect on the uptake of Cr(VI) than the tertiary amine group
of LMP 64 (Pehlivan & Cetin, 2009; McGuire et al., 2007).
1/qe = 1/(Qmax b Ce) + 1/Qmax (5)

log qe = (1/n) log Ce + log kf (6)
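The linearised forms in Equations (5) and (6) can be fitted by simple least squares; the sketch below uses illustrative Ce–qe pairs, not the study's measurements.

import numpy as np

ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium concentration, mg/L
qe = np.array([3.1, 5.0, 7.4, 10.8, 12.9])  # uptake, mg/g

# Langmuir (Eq. 5): 1/qe = 1/(Qmax*b*Ce) + 1/Qmax -> regress 1/qe on 1/Ce
slope, intercept = np.polyfit(1.0 / ce, 1.0 / qe, 1)
q_max = 1.0 / intercept          # mg/g
b = intercept / slope            # L/mg
print(f"Langmuir: Qmax = {q_max:.2f} mg/g, b = {b:.3f} L/mg")

# Freundlich (Eq. 6): log qe = (1/n) log Ce + log kf -> regress log qe on log Ce
slope_f, intercept_f = np.polyfit(np.log10(ce), np.log10(qe), 1)
print(f"Freundlich: 1/n = {slope_f:.3f}, kf = {10 ** intercept_f:.3f}")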
F = qCr,t/qCr,e = (CCr,0 − CCr,t)/(CCr,0 − CCr,e) (7)
4 CONCLUSION
An iron sulphide impregnated hybrid material was synthesized by an in situ synthesis process
on a functional-polymer-supported host strong base anion exchange resin. The removal process
was found to be strongly dependent on the pH, dosage, initial concentration and sulfate
concentration. HIM shows high selectivity toward Cr(VI), with a high adsorption capacity
found by fitting the Freundlich model. A fixed bed column containing HIM, with a feed water
composition of 200 µg/L of Cr(VI) and sulphate, chloride and bicarbonate of 100 mg/L each,
can treat up to 5,500 BVs before the effluent Cr(VI) concentration reaches the Indian standard
(50 µg/L). The column study identified that the rate limiting step is the intraparticle solute
transport mechanism. The fixed bed column study and the possibility of regeneration show
that HIM can be considered a promising hybrid material for the trace-level removal of Cr(VI)
from contaminated water.
REFERENCES
Anderson, R.A. (1997). Chromium as an essential nutrient for humans. Regulatory Toxicology and
Pharmacology, 26(26), s35–s41.
Babu, B.V. & Gupta, S. (2008). Adsorption of Cr(VI) using activated neem leaves: Kinetic studies.
Adsorption,14(1), 85–92.
Balan, C., Volf, I. & Bilba, D. (2013). Chromium (VI) removal from aqueous solutions by purolite
base anion-exchange resins with gel structure. Chemical Industry and Chemical Engineering Quar-
terly,19(4), 615–628.
Chanthapon, N., Sarkar, S., Kidkhunthod, P. & Padungthon, S. (2017). Lead removal by a reusable gel cat-
ion exchange resin containing nano-scale zero valent iron. Chemical Engineering Journal, 331, 545–555.
DeMarco, M.J., Sengupta, A.K. & Greenleaf, J.E. (2003). Arsenic removal using a polymeric/inorganic
hybrid sorbent. Water Research, 37(1), 164–176.
Greenleaf, J.E., Lin, J.C. & Sengupta, A.K. (2006). Two novel applications of ion exchange fibers:
Arsenic removal and chemical-free softening of hard water. Environmental Progress, 25(4), 300–311.
International Agency for Research on Cancer. (2012). IARC Monograph: Chromium (VI) Compounds.
147–168.
Kimbrough, D.E., Cohen, Y., Winer, A.M., Creelman, L. & Mabuni, C. (1999). A critical assessment of
chromium in the environment. Critical Reviews in Environmental Science and Technology, 29(1), 1–46.
Li, P. & Sengupta, A.K. (2000). Intraparticle diffusion during selective ion exchange with a macropo-
rous exchanger. Reactive and Functional Polymers, 44(3), 273–287.
McGuire, M.J., Blute, N.K., Qin, G., Kavounas, P., Froelich, D. & Fong, L. (2007). Hexavalent chro-
mium removal using anion exchange and reduction with coagulation and filtration. Water Research
Foundation, Project #3167, 140.
Neagu, V., Untea, I., Tudorache, E. & Luca, C. (2003). Retention of chromate ion by conventional and
N-ethylpyridinium strongly basic anion exchange resins. Reactive and Functional Polymers, 57(2–3),
119–124.
Pehlivan, E. & Cetin, S. (2009). Sorption of Cr(VI) ions on two Lewatit-anion exchange resins and their
quantitative determination using UV-visible spectrophotometer. Journal of Hazardous Materials,
163(1), 448–453.
Saleh, F.Y., Parkerton, T.F., Lewis, R.V., Huang, J.H. & Dickson, K.L. (1989). Kinetics of chromium
transformation in the environment. Science of the Total Environment, 86, 25–41.
Sarkar, S., Guibal, E., Quignard, F. & Sengupta, A.K. (2012). Polymer-supported metals and metal
oxide nanoparticles: Synthesis, characterization, and applications. Journal of Nanoparticle Research,
14(2), 1–24.
Sengupta, A.K. & Clifford, D. (1986). Important process variables in chromate ion exchange. Environ-
mental Science & Technology, 20(2), 149–55.
Sharma, S.K., Petrusevski, B. & Amy, G. (2008). Chromium removal from water: A review. Journal of
Water Supply: Research and Technology—AQUA, 57(8), 541–553.
ABSTRACT: One of the most significant problems in urban India is poor Solid Waste
Management (SWM) systems that are leading to environmental, health and economic
issues. The aim of this paper is to provide an insight into SWM systems and practices in India,
and the challenges faced. The review focuses on three major areas: solid waste characteris-
tics; SWM systems; and decision-making. The two major issues at source are the increasing
amount of waste, and the changing composition of the waste streams. The issues in waste
logistics are due to poor waste segregation, unregulated waste transportation, and incom-
patible and outdated transportation technologies. Sustainable waste processing and disposal
practices are yet to be adopted and are facing numerous implementation and operational
challenges. There is a need to understand the operational issues in waste processing and dis-
posal so that better technological development and adoption at a macro level is made possi-
ble. The decision-making practices and solutions generated in India are unscientific or based
on piecemeal approaches. Inefficiencies in waste management systems are primarily due to
poor alignment between requirements, constraints and operations in the SWM system. A sci-
entific SWM design that meets the fundamental requirements of handling a variety of waste,
being environmentally friendly, and being economically and socially acceptable is needed. A robust
SWM system design places emphasis on active public participation in decision-making, and
operational integration of its subcomponents.
1 INTRODUCTION
Solid waste generation, and its management, is a worldwide problem. In India, it is antici-
pated that about 260–300 million tons of solid waste per annum will be generated by 2050
(Joshi & Ahmed, 2016). This is primarily due to population growth, increasing urbanization
and socio-economic development. Solid waste leads to different environmental problems and
ecosystem changes, human health issues, and socio-economic issues. There is an increasing
focus on using Solid Waste Management (SWM) systems in the movement toward an envi-
ronmentally sustainable society. Over the last few decades, solid waste management practices
have continuously evolved, from simple collection and dumping to integrated solid waste
management informed by learned experience. However, the operational issues also need to
be understood.
The aim of this paper is to provide an insight into the SWM systems and practices in
India.
2 LITERATURE REVIEW
Peer-reviewed articles published since 2000 from Scopus and Google Scholar databases
were selected using the keywords ‘e-waste’, ‘municipal solid waste’, ‘solid waste manage-
ment’, ‘waste recycling’, ‘life cycle assessment’, ‘waste disposal’, ‘environment assessment’,
and ‘multi-criteria decision-making’. The articles were included after careful review of the
abstracts.
2.2.3 Recycling
Ragpickers affect the material flow of the waste streams in terms of segregation and recovery
of valuable materials from open dumping sites, community storage bins or from municipal
landfill sites. The material recovered by the rag pickers or from waste treatment plants then
reaches the recyclers, and moves into the subsequent production cycle. Recy-
clable materials recovered include paper, plastic, glass, metals, and e-waste (Joshi & Ahmed,
2016; Nandy et al., 2015; Sharholy et al., 2008). Rag pickers, scrap dealers, waste traders and
recycling plants are the elements involved in recycling, and recycling points include houses,
open dumps, bus/train stations and municipal landfills. With respect to the value generated
from the recycling process, Nandy et al. (2015) and Sharholy et al. (2008) take a positive view
while Gupta et al. (2015) take a negative view.
2.3 Decision-making
SWM systems aim for overall waste management that is environmentally and economically
sustainable for a particular region, and socially acceptable (Agarwal et al., 2015;
Guerrero et al., 2013). A balanced coordination among different factors (social, institutional,
environmental, financial and technical) is needed to achieve an optimal waste management
plan (Guerrero et al., 2013; Srivastava et al., 2014). Decision makers need to be well informed
when developing integrated waste management strategies that are adapted to the needs of a
city, and consider the ability of citizens to pay for the services (Guerrero et al., 2013). The
concept of sustainable waste management is gaining more focus and there is emphasis on
multi-criteria decision analysis.
Decisions pertaining to SWM systems are complex as multiple criteria and multi-actors
are involved. Facility location problems for transfer stations or treatment plant locations
are solved using geographical information systems, mixed integer linear programs, the analytic
hierarchy process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solu-
tion (TOPSIS) techniques (Choudhary & Shankar, 2012; Khan & Samadder, 2015; Sumathi
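To make the TOPSIS step concrete, the following is a minimal sketch; the candidate sites, criteria, scores and weights are hypothetical illustrations, not data from the cited studies.

import numpy as np

# Hypothetical decision matrix: rows = candidate facility sites,
# columns = criteria (cost, distance from settlements, groundwater depth).
X = np.array([[4.0, 6.0, 8.0],
              [7.0, 4.0, 5.0],
              [6.0, 8.0, 4.0]])
weights = np.array([0.5, 0.3, 0.2])      # assumed criteria weights
benefit = np.array([False, True, True])  # cost is minimized; the rest maximized

# 1. Vector-normalize each criterion column and apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal-best and ideal-worst values per criterion.
best = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Relative closeness to the ideal solution.
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)
print(np.argsort(closeness)[::-1])  # site indices ranked best to worst

The same decision matrix could equally be fed by an AHP weighting step; TOPSIS itself only assumes that the criteria weights have already been fixed.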
3 CONCLUSION
This paper provides an insight into the SWM systems and practices in India, and the chal-
lenges faced. The review focused on three major areas: solid waste characteristics; SWM
systems; and decision-making. Based on the literature review, it is evident that SWM is a
complex and challenging problem. The two major waste management challenges are the
increasing amount of waste and the changing composition of the waste stream. Waste seg-
regation, unregulated waste transportation, and incompatible and outdated transportation
technologies result in a poor waste logistics system. There is a need to understand the opera-
tional issues in waste processing and disposal so that better technological development and
adoption, at a macro level, is made possible.
Decision-making processes pertaining to SWM systems are complex because multiple cri-
teria are involved. Invariably, the decision-making practices and solutions generated in India
are unscientific or based on piecemeal approaches. However, recent literature provides evidence
of the use of multi-criteria decision-making techniques. The poor performance of SWM systems
is primarily due to poorly aligned requirements, constraints and operations. A scientific
SWM design is needed which meets the fundamental requirements of being able to handle
REFERENCES
Agarwal, R., Chaudhary, M. & Singh, J. (2015). Waste management initiatives in India for human well
being. European Scientific Journal, 11(10).
Central Pollution Control Board (CPCB, 2013). Status report on municipal solid waste management.
Retrieved from http://www.cpcb.nic.in/divisionsofheadoffice/pcp/MSW_Report.pdf and
http://pratham.org/images/paper_on_ragpickers.pdf.
Choudhary, D. & Shankar, R. (2012). An STEEP-fuzzy AHP-TOPSIS framework for evaluation and
selection of thermal power plant location: A case study from India. Energy, 42(1), 510–521.
Debnath, S. & Bose, S.K. (2014). Exploring full cost accounting approach to evaluate cost of MSW
services in India. Resources, Conservation and Recycling, 83, 87–95.
European Business and Technology Centre (EBTC, 2014). Snapshot: Waste management in India.
Retrieved from http://ebtc.eu/index.php/knowledge-centre/publications/environment-publications/
174-sector-snapshots-environment/255-waste-management-in-india-a-snapshot.
Guerrero, L.A., Maas, G. & Hogland, W. (2013). Solid waste management challenges for cities in devel-
oping countries. Waste Management, 33(1), 220–232.
Gupta, N., Yadav, K.K. & Kumar, V. (2015). A review on current status of municipal solid waste man-
agement in India. Journal of Environmental Sciences, 37, 206–217.
Joseph, K., Rajendiran, S., Senthilnathan, R. & Rakesh, M. (2012). Integrated approach to solid waste
management in Chennai: An Indian metro city. Journal of Material Cycles and Waste Manage-
ment, 14(2), 75–84.
Joshi, R. & Ahmed, S. (2016). Status and challenges of municipal solid waste management in India: A
review. Cogent Environmental Science, 2(1), 1139434.
Kalyani, K.A. & Pandey, K.K. (2014). Waste to energy status in India: A short review. Renewable and
Sustainable Energy Reviews, 31, 113–120.
Khan, S. & Faisal, M.N. (2008). An analytic network process model for municipal solid waste disposal
options. Waste Management, 28(9), 1500–1508.
Khan, D. & Samadder, S.R. (2015). A simplified multi-criteria evaluation model for landfill site ranking
and selection based on AHP and GIS. Journal of Environmental Engineering and Landscape Manage-
ment, 23(4), 267–278.
Kharat, M.G., Raut, R.D., Kamble, S.S. & Kamble, S.J. (2016). The application of Delphi and AHP
method in environmentally conscious solid waste treatment and disposal technology selection. Man-
agement of Environmental Quality: An International Journal, 27(4), 427–440.
Kiddee, P., Naidu, R. & Wong, M.H. (2013). Electronic waste management approaches: An overview.
Waste Management, 33(5), 1237–1250.
Kumar, S., Bhattacharyya, J.K., Vaidya, A.N., Chakrabarti, T., Devotta, S. & Akolkar, A.B. (2009).
Assessment of the status of municipal solid waste management in metro cities, state capitals, class I
cities, and class II towns in India: An insight. Waste Management, 29(2), 883–895.
Nandy, B., Sharma, G., Garg, S., Kumari, S., George, T., Sunanda, Y. & Sinha, B. (2015). Recovery of
consumer waste in India—A mass flow analysis for paper, plastic and glass and the contribution of
households and the informal sector. Resources, Conservation and Recycling, 101, 167–181.
Nixon, J.D., Dey, P.K., Ghosh, S.K. & Davies, P.A. (2013). Evaluation of options for energy recovery
from municipal solid waste in India using the hierarchical analytical network process. Energy, 59,
215–223.
Pandyaswargo, A.H. & Premakumara, D.G.J. (2014). Financial sustainability of modern composting:
The economically optimal scale for municipal waste composting plant in developing Asia. Interna-
tional Journal of Recycling of Organic Waste in Agriculture, 3(3), 1–14.
Parthan, S.R., Milke, M.W., Wilson, D.C. & Cocks, J.H. (2012). Cost function analysis for solid waste
management: A developing country experience. Waste Management & Research, 30(5), 485–491.
Phillips, J., & Mondal, M. K. (2014). Determining the sustainability of options for municipal solid waste
disposal in Varanasi, India. Sustainable Cities and Society, 10, 11–21.
Rathi, S. (2007). Optimization model for integrated municipal solid waste management in Mumbai,
India. Environment and Development Economics, 12(1), 105–121.
Sharholy, M., Ahmad, K., Mahmood, G. & Trivedi, R.C. (2008). Municipal solid waste management in
Indian cities—A review. Waste Management, 28(2), 459–467.
ABSTRACT: Since independence, the Indian government has been trying to electrify all
rural areas—a daunting task. Bihar, with less than 50% of households electrified, has ambi-
tious plans for increased solar power use. This study compared the environmental and eco-
nomic benefits of centralized and decentralized solar power options to electrify Bihar’s rural
households. A centralized scenario with utility-scale, photovoltaic plants was compared with
decentralized residential rooftop photovoltaic systems. A comparative environmental and cost
life cycle assessment was conducted with a functional unit of 1 kWh electricity to a rural
household in Bihar. The centralized scenario had lower environmental impacts and costs.
However, Bihar’s electricity consumption is mainly residential, which could lead to unutilized
electricity. Considering this made the centralized scenario the worse option. This study tried to
understand the effect of electricity consumption profiles on a system's environmental impacts
and costs, and the role they play in policy decisions regarding increases in generation capacity.
1 INTRODUCTION
One of the main goals of the Indian Government, since independence, has been to provide
electricity to all its households, especially rural ones. At the village level, six of the 31 Indian
states have 100% electrification. However, some other states are far behind. One such state is
Bihar. With around 47% of households electrified, the state is planning to increase its gen-
eration capacity and transmission and distribution infrastructure (Open Government Data
Platform of India, 2017). The electrification process has been slow, leading to gaps in supply,
which are being met by mushrooming generator businesses (Oda, 2012).
At present, India is trying to increase the renewable fraction of its energy mix and is aggres-
sively pursuing solar power. The National Solar Mission has a revised aim of deploying 100,000
MW of grid-connected solar power by 2022 (Press Trust of India, 2015). Bihar has also aimed
at increasing its installed capacity and is looking forward to solar as its main option (Verma,
2017). The Bihar Government is looking at centralized, utility-scale solar power plants based
on photovoltaic (PV) technology as an answer. Two locations, Kajra and Pirpainti, which had
been chosen for thermal power plants are now being considered for PV power plants (Verma,
2017). At the same time, India is also trying to increase rooftop PV installations by providing
incentives (Ministry of New and Renewable Energy, 2017c). Many studies have compared
decentralized PV systems to other methods of power generation, finding them superior eco-
nomically and environmentally (Gmünder et al., 2010; Molyneaux et al., 2016). A comparison
between centralized and decentralized PV systems has not been conducted.
The present study compared solar PV installations in centralized, utility-scale and decen-
tralized, rooftop scenarios to provide electricity for Bihar’s rural households. The total capac-
ity was assumed to be 400 MWp, similar to what might be installed at Kajra and Pirpainti.
The two scenarios were compared using Life Cycle Assessment (LCA) to understand the
2 METHODS
Table 4. System boundaries for each scenario with a comparison of demand and PV generation.
4 CONCLUSION
This study compared rural electrification options for Bihar using PV technology, with the
existing grid as the baseline. Overall, the best scenario in terms of global warming potential
(GWP), cumulative energy demand (CED) and levelized cost of electricity (LCOE) was the
centralized PV system. However, in Bihar, such a system could lead to significant
wastage of electricity, increasing the GWP, CED and LCOE compared to the off-grid decen-
tralized PV system. This must be considered when comparing rural electrification options as
most of Bihar’s consumption is from the residential sector, with a peak demand at times of
low or no solar insolation.
The electricity consumption profile could impact the system’s environmental impacts and
costs, turning a seemingly best scenario into the worst because of unutilized generated elec-
tricity. It plays a crucial role in policy decisions and should be considered when looking at
capacity increases for energy generation.
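The arithmetic behind this observation can be sketched as follows; the numbers are illustrative assumptions, not results from the study. If a fraction u of the generated electricity goes unutilized, every per-generated-kWh metric scales by 1/(1 − u) on a per-delivered-kWh basis.

def per_delivered_kwh(metric_per_generated_kwh, unutilized_fraction):
    # Scale a per-generated-kWh metric (LCOE, GWP or CED) to a
    # per-delivered-kWh basis when part of the generation is wasted.
    return metric_per_generated_kwh / (1.0 - unutilized_fraction)

# Assumed figures only: a plant with an LCOE of 4.0 (currency units/kWh
# generated) and a GWP of 50 gCO2-eq/kWh generated, with 40% of generation
# unutilized because residential demand peaks after sunset.
print(per_delivered_kwh(4.0, 0.40))   # 6.67 per delivered kWh
print(per_delivered_kwh(50.0, 0.40))  # 83.3 gCO2-eq per delivered kWh

With a large enough unutilized fraction, a scenario that looks best on a per-generated-kWh basis can become the worst on a per-delivered-kWh basis, which is the effect described above.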
Previous studies have also cited high connection costs leading to low connection rates
(Cook, 2011), which is the case when there is no correlation between the consumption and
generation profiles. Electricity would be unaffordable and hence, inaccessible to the poor.
However, the cost might be reduced by increasing industrial development in the region.
Future studies should aim to understand the effect of such a development in the area,
whether the benefits of a rural electrification program would reach the intended population,
and what policies would aid this endeavor.
REFERENCES
B.I.S. (1987). IS 875–1987, Code of practice for design loads (other than earthquake) for buildings and
structures, Part 3. New Delhi.
Blum, N.U., Sryantoro Wakeling, R., & Schmidt, T.S. (2013). Rural electrification through village
grids—Assessing the cost competitiveness of isolated renewable energy technologies in Indonesia.
Renewable and Sustainable Energy Reviews, 22, 482–496.
Central Electricity Authority. (2016). Bihar: Transmission System Study for 13th Plan. New Delhi.
Central Electricity Authority. (2017). Power Sector: Executive Summary for the Month of Jan, 2017.
New Delhi.
Chandrasekaran, K. (2017, July 20). Crashing solar tariffs crush storage plans. The Economic Times.
New Delhi.
Clean Development Mechanism Executive Board. (2011). Guidelines on the Assessment of Investment
Analysis. Bonn.
Cook, P. (2011). Infrastructure, rural electrification & development. Energy for Sustainable Develop-
ment, 15(3), 304–313.
Deo, P., Jayaraman, S., Verma, V.S., & Dayalan, M.D. (2010a). Benchmark Capital Cost for 400/765 kV
Transmission Lines. New Delhi.
Deo, P., Jayaraman, S., Verma, V.S., & Dayalan, M.D. (2010b). Benchmark Capital Cost for Substation
associated with 400/765 kV Transmission System. New Delhi.
Fu, R., Feldman, D., Margolis, R., Woodhouse, M., & Ardani, K. (2017). U.S. Solar Photovoltaic Sys-
tem Cost Benchmark : Q1 2017. National Renewable Energy Laboratory. Golden, CO.
Gakkhar, N. (2017). Benchmark Cost for “Grid Connected Rooftop and Small Solar Power Plants Pro-
gramme” for the year 2017−18. New Delhi.
Gmünder, S.M., Zah, R., Bhatacharjee, S., Classen, M., Mukherjee, P., & Widmer, R. (2010). Life cycle
assessment of village electrification based on straight jatropha oil in Chhattisgarh, India. Biomass
and Bioenergy, 34(3), 347–355.
ABSTRACT: Eco-friendly sodium silicate and promoters, which are compatible with cement,
are used to obtain improved soil properties. The possibility of using cement and sodium sili-
cate admixed with composite promoters to improve the strength of soft clay was analysed in
the present study. The influential factors in this study are the proportion of sodium
silicate binding agent and the curing time. The unconfined compressive strength of the stabilized
clay at different ages was tested. Based on a literature study, the composite promoters selected
for the present study comprise CaCl2 and NaOH. For the ordinary Portland cement (OPC) and
sodium silicate system admixed with composite promoters, the permeation of the CaCl2 and
NaOH solutions is expected to facilitate the precipitation of Ca(OH)2 at a 1:1 molar ratio,
which was found to significantly improve the strength of soft clay. More importantly, it was
found that a much lower dosage of the selected clay stabilizer achieves an improvement in
strength equivalent to that of cement; hence it can be a more effective and eco-friendly clay stabilizer.
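The promoter chemistry implied in this abstract can be summarized by the standard stoichiometry below; these are textbook reactions, not equations reproduced from the paper itself:

\mathrm{CaCl_2} + 2\,\mathrm{NaOH} \rightarrow \mathrm{Ca(OH)_2}\!\downarrow + 2\,\mathrm{NaCl}

\mathrm{Na_2SiO_3} + \mathrm{Ca(OH)_2} \rightarrow \text{C-S-H (calcium silicate hydrate)} + 2\,\mathrm{NaOH}

The Ca(OH)2 precipitated by the first reaction supplies the calcium that the sodium silicate consumes in the second, forming the cementitious C-S-H gel that binds the clay particles.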
1 INTRODUCTION
Construction of buildings and other civil engineering structures on weak or soft soil is highly
risky because such soil is susceptible to differential settlements due to its poor shear strength
and high compressibility. Soil stabilization is the process of improving the physical and
engineering properties of problematic soils to some predetermined targets. Sodium silicates
have been widely used as supplementary cementing materials substituting ordinary Portland
cement to improve the soil properties. OPC is the most commonly used stabilizer since it is
readily available at reasonable cost. Nevertheless, a major issue with using OPC is that its
production processes are energy intensive and emit a large quantity of CO2. To improve the
environmental acceptability and to reduce the construction cost of the deep mixing method,
the partial replacement of the cement by supplementary cementing materials such as sodium
silicate is one of the best alternative ways.
Application of sodium silicate for geotechnical works has been reported by many research-
ers. Used as a component of soil stabilizer, sodium silicate has unique advantages:
(i) reliable and proven performance, (ii) safety and convenience in construction, and
(iii) environmental acceptability and compatibility (Rowles and O’Connor, 2003; Ma et al.,
2014). In order to investigate the possibility of using cement and sodium silicate admixed
with composite promoters to improve the strength of soft clayey soil, Thonnakkal soft clay
is considered. These deposits are composed of silty clay, having extremely low shear strength
and high compressibility.
The aim of the present study is to achieve an OPC-based clay stabilizer which provides an
enhancement of the mechanical properties equivalent to that of a higher content of OPC. The effect
of sodium silicate on the strength development of samples stabilized with OPC and compos-
ite promoter was investigated. The unconfined compressive strength was used as a practical
indicator of strength development. The binders consist of OPC, sodium silicate, and composite
promoters. The present study aimed to obtain an optimum dosage of
2 MATERIALS
2.2 Cement
A 43 grade ordinary Portland cement (OPC) was used in this study. Properties are listed in
Table 2.
3 METHODOLOGY
Figure 3. Stress-strain curve for samples treated with different sodium silicate dosage for a curing
period of 14 days.
sodium silicate. An increase of 15.35% in dry density and a decrease of
26.7% in moisture content were observed in samples treated with the 1% sodium silicate additive.
Table 4. Unconfined compressive strength of different samples for varying curing periods.
Sample proportion 0th day 3rd day 7th day 14th day
Figure 4. Effect of curing on strength development for samples treated with varying sodium silicate
dosages.
period, and a significant increase in strength is found after 7 days of curing. A more brittle
and sudden failure was observed with curing.
5 CONCLUSION
The conclusions made based on the present study on clayey soil are:
This paper analysed the strength development in OPC and sodium silicate-stabilized clay
with a composite promoter (NaOH & CaCl2). NaOH and CaCl2 as a composite promoter at a
mass ratio of 1:1, along with the addition of sodium silicate to cement-stabilized clay, significantly
improves the strength.
• Strength development is found to be higher at whole-number dosages of sodium
silicate than at fractional dosages, for a given composite promoter (CaCl2 & NaOH)
dosage of 1%.
• For a cement content of 10% and a composite promoter (CaCl2 & NaOH) at a 1:1 molar
ratio, 1% sodium silicate was found to be the most effective dosage for strength development;
upon increasing the sodium silicate dosage, the bonding gel associated with sodium silicate
becomes weaker.
• Strength development of samples increases upon increasing the curing period and a sig-
nificant increase in strength is found after 7 days of curing.
REFERENCES
Bindu, J., and Ramabhadran, A. (2011). “Study on cement stabilized Kuttanad clay.” Proc., Indian Geo-
technical Conf., Indian Geotechnical Society (IGS), Kochi, India, 465–68.
Ma, C., Qin, Z., Zhuang, Y., Chen, L. & Chen, B. (2015). Influence of sodium silicate and promoters on
unconfined compressive strength of Portland cement-stabilized clay. Soils and Foundations, 55(5),
1222–1232.
Vakili, M.V., Chegenizadeh, A., Nikraz, H. & Keramatikerman, M. (2016). Investigation on shear
strength of stabilised clay using cement, sodium silicate and slag. Applied Clay Science, Elsevier.
Saroglou, I.H. (2013). Compressive strength of soil improved with cement. ASCE Library
(ascelibrary.org), accessed 06/06/13.
Kazemian, S., Prasad, A., Huat, B.B.K., Ghiasi, V. & Ghareh, S. (2012). Effects of cement–sodium
silicate system grout on tropical organic soils. Arabian Journal for Science and Engineering, 37,
2137–2148.
Suganya, K. & Sivapullaiah, P.V. (2016). Role of sodium silicate additive in cement-treated Kuttanad
soil. Journal of Materials in Civil Engineering, ASCE, ISSN 0899-1561.
K. Kannan
Department of Civil Engineering, Marian Engineering College, Trivandrum, India
S. Gayathri
Department of Civil Engineering, John Cox C.S.I Memorial Institute of Technology, Trivandrum, India
P. Vinod
Department of Civil Engineering, Government Engineering College, Trichur, India
ABSTRACT: Kuttanad clay is a well-known soil group, known for its low shear strength and high
compressibility, making it almost always unusable for construction in its natural state. The
traditionally used stabilization techniques, such as preloading and chemical grouting, are unsuit-
able in the present scenario, as they are either outdated or affect the environment aggressively.
This paper reviews some of the sustainable methods of ground improvement which have been
used and reported in highly plastic clays. These techniques can potentially be used in Kuttanad
clay also, provided further studies are conducted. The paper also reports the effect of one tech-
nique, Microbial Induced Calcite Precipitation (MICP), on the liquid limit of Kuttanad clay.
1 INTRODUCTION
When only poor quality soil is available at the construction site, the best option is to modify
the properties of the soil so that it meets the design requirements. The process of improving
the strength and durability of soil is known as soil stabilization. It is the alteration of soils to
enhance their physical properties. Stabilization can increase the shear strength and control
the shrink-swell properties, thus improving the load bearing capacity of the soil.
Kuttanad is situated in the central half of Kerala covering an area of approximately 1100
sq.km and lies 0.6 m to 2.2 m below the mean sea level. Kuttanad clay is an important soil
group, well known for its low shear strength and high compressibility. Soil in this region is
soft black or grey marine clay composed of minerals such as montmorillonite, kaolinite, iron
oxide and aluminum oxide (Vinod and Bindu, 2010). The natural water contents of this soil
are very high and close to the liquid limit, sometimes even exceeding it. The typical Kuttanad soil
consists primarily of silt and clay fractions. It is a weak foundation material, with a number of
failures of structures and embankments reported. Since Kuttanad is the rice bowl of Kerala,
any ground improvement technique adopted in this region should be eco-friendly and should
never cause any harm to the environment, especially to the soil and water. Thus, the pres-
ently used methods of physical improvement such as preloading and chemical improvement
by addition of adulterants might prove to be inefficient in the present scenario, wherein the
focus is on fast, sustainable technologies. All cement-based techniques may seem harmless to
the public eye, but add heavily to the carbon footprint during manufacture. The improvement of
subsoil using alternative biological or eco-friendly chemical methods is thus a growing area of
interest, and the focus of the present paper. In particular, the paper focuses on some alternative
sustainable techniques which can be potentially utilized to improve Kuttanad clay.
2 BAGASSE ASH
Bagasse ash is a residue obtained from the burning of bagasse in sugar-producing factories.
Bagasse is the cellular fibrous waste product left after the extraction of sugar juice in cane
mills. For every 10 tons of sugarcane crushed, a sugar factory produces nearly 3 tons of wet
bagasse as a byproduct of the sugar cane industry (Kharade et al., 2014). When this
bagasse is burned, the resultant ash is bagasse ash. Bagasse ash shows the presence of amorphous
silica, which is an indication of pozzolanic properties, responsible for holding the soil grains
together for better shear strength. Pozzolanic material is very rich in the oxides of silica and
alumina and sometimes calcium. Pozzolans usually require the presence of water in order for
silica to combine with calcium hydroxide to form stable calcium silicate which has cementi-
tious properties, which can then develop good bonding between soil grains in case of weak
soil. Table 1 shows the constituents of bagasse ash (Kharade et al., 2014).
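The pozzolanic reaction just described can be written in simplified cement-chemistry shorthand (a generic representation, not an equation given by the cited authors):

\mathrm{SiO_2\,(amorphous)} + \mathrm{Ca(OH)_2} + \mathrm{H_2O} \rightarrow \text{C-S-H (calcium silicate hydrate)}

It is the slow formation of this C-S-H gel that binds adjacent soil grains and raises the shear strength of the treated soil.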
3 BIOENZYME
dilution water depends on the in-situ moisture content of the soil. There are many bioenzymes
available, namely Renolith, Permazyme, Terrazyme, Fujibeton, etc. The most commonly used
is Terrazyme.
3.1 Terrazyme
Terrazyme, derived from vegetable extracts, is specially formulated to modify the engineer-
ing properties of soil, and is usually applied as a mixture with water. Terrazyme acts on the soil by
reducing the voids between the particles, minimizing the adsorbed layer of water and maxi-
mizing compaction. It reacts with the organic materials to form cementitious material, bring-
ing about a decrease in permeability and an increase in chemical bonding, creating a permanent
structure resistant to weather, wear and tear (Gupta et al., 2017). Table 2 shows the properties
of terrazyme (Shirsath et al., 2017).
An alternative to the addition of chemicals such as bagasse ash and various bioenzymes
is by the use of some biomediated stabilization technique. Microbial Induced Calcite Pre-
cipitation (MICP) is one such technique wherein some bacteria are used to catalyse chemical
reactions, resulting in precipitation of calcite within the soil pores. MICP can be induced in
soils by many methods, but is usually stimulated by one of four mechanisms—urea hydroly-
sis, denitrification, iron reduction, and sulphate reduction. The predominance of these various
mechanisms depends on the associated reaction's propensity to occur in the environment.
Ureolysis is predominant in manipulated soil amongst others because the reaction changes
the environmental conditions of a system (i.e. increase in pH), which inhibits other competi-
tive processes. The basic concept of urea hydrolysis involves the hydrolysis of urea to produce
carbonate, which combines with the supplied calcium substrate to precipitate calcite. A bacte-
rium with an active urease enzyme is chosen for this purpose, which when introduced along
with urea, hydrolyses it into ammonia and carbonate.
The produced carbonate ions precipitate in the presence of calcium ions as calcite crystals,
which form cementing bridges between the existing soil grains.
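The two-step chemistry described above is commonly written as follows (standard MICP stoichiometry, not reproduced from this paper):

\mathrm{CO(NH_2)_2} + 2\,\mathrm{H_2O} \xrightarrow{\text{urease}} 2\,\mathrm{NH_4^{+}} + \mathrm{CO_3^{2-}}

\mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} \rightarrow \mathrm{CaCO_3}\!\downarrow

The ammonium produced by the first reaction raises the pH, which favours carbonate over bicarbonate and drives the calcite precipitation of the second.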
Sample  Liquid limit, untreated (%)  Liquid limit, treated (%)
1  118.8  88.3
2  126.0  95.9
3  106.4  80.0
4  174.2  132.3
The dredged or excavated marine clays can be strengthened by an innovative method known
as bioencapsulation, which can convert the dredged clay wastes into value added construc-
tion materials. It is a process that increases the strength of soft clayey soil through the formation of
a strong shell around a piece of soft material by the action of urease-producing bacteria (UPB).
Thus, bioencapsulation can be a more effective alternative to MICP for improving Kuttanad
clay for applications as a fill material, especially for pavement purposes.
6 CONCLUSIONS
This paper reviews some of the environmental friendly methods that can be potentially used
to stabilise Kuttanad clays. Consideration of soil as a living ecosystem offers the potential
for innovative and sustainable solutions to geotechnical problems. This is a new paradigm
for many in geotechnical engineering. Realising the potential of this paradigm requires a
multidisciplinary approach that embraces biology and geochemistry to develop techniques
for beneficial ground modification. The suggested biological methods possess the potential
mentioned above, but need fine-tuning before they can be applied at field scale. The laboratory
study showed a significant reduction in liquid limit, which underlines the potential of the said
techniques. Their implementation will thus take time. But when compared with the
presently used cement-based techniques, thought to be harmless in spite of their energy-
intensive, carbon-producing manufacturing processes, these techniques go a long way towards
building a sustainable and energy-efficient future. Bioencapsulation is a further modification of
the biological method which could be an even better alternative. There are undoubtedly many
such processes yet to be discovered, and further research is required to delineate them. It is
clear, however, that biological processes influence engineering soil properties and are the future
of ground improvement, especially the improvement of Kuttanad clay.
REFERENCES
Eujine, G.N., Somervell, L.T., Chandrakaran, S., & Sankar, N. 2014. Enzyme Stabilization of High
Liquid Limit Clay. European Journal of Geotechnical Engineering 19: 6989–6995.
1 INTRODUCTION
Liquefaction may occur in fully saturated sands, silts, and clays of low plasticity. When a
saturated soil mass is subjected to seismic or dynamic loads, there is a sudden build-up of
pore water pressure within a short duration. If the soil cannot dissipate the excess pore pressure,
the result is a reduction in the effective shear strength of the soil mass. In this state, the soil
mass behaves like a liquid and undergoes large deformations, settlements, flow failures, etc. This
phenomenon is called soil liquefaction. As a result, the ability of the soil deposit to support the
foundations of buildings, bridges, dams, etc. is reduced. Liquefiable soil also exerts a higher
pressure on retaining walls, which can cause them to tilt or slide. The lateral movement could
prompt settlement of the retained soil and distress to structures constructed on various
soil deposits. A sudden build-up of pore water pressure during earthquake also triggers land-
slides and cause the collapse of dams. Liquefaction effects on damages of structures are com-
monly observed in low-lying areas near the water bodies such as rivers, lakes, and oceans.
The CSL is the boundary line which separates the liquefiable and non-liquefiable soil
states (Kramer, 1996). Phan et al. (2016) studied the critical state line (CSL)
of sand-fines mixtures experimentally. Their results indicate that a unique CSL is
obtained for a specific fines content across various confining pressures and different initial glo-
bal void ratios. Marto et al. (2014) suggested that neither the fines percentage
nor other corresponding compositional characteristics are adequate to be correlated with
the critical state parameters of sand matrix soils. Sadrekarimi and Olson (2009) used
ring shear tests to find the CSL of soils as the limited displacement that the triaxial device
is capable of imposing on a specimen is insufficient to reach a critical state where particle
rearrangement and potential crushing are complete. The present paper attempts to use CSL
developed from the hypoplastic model simulations to analyze the liquefaction susceptibility
of silty sand under static triaxial loading.
2 HYPOPLASTIC MODEL
It is found that hypoplastic models are more advanced than elastic-plastic models for con-
tinuum modelling of granular materials. In contrast to elastoplastic models, no decomposition
of deformation into elastic and plastic parts, yield surface or plastic potential is required.
3 MODEL PARAMETERS
The silty sand used in this study is processed by mixing 40% quarry dust into the fine sand.
The fine sand is collected from Cherthala, Kerala and quarry dust is procured from Blue
Diamond M-sand manufacturers, Kattangal, Kerala. All the basic properties tests were
performed on the soil combinations, and the properties are listed in Table 1. A combined
dry sieve and hydrometer analysis were carried out to obtain the particle size distribution
(Figure 1).
The standard routine laboratory tests were conducted on the non-plastic silty sand to
determine the model parameters. The eight hypoplastic model parameters
(ϕc, hs, n, ei0, ed0, ec0, β, and α) were determined following the detailed procedure
explained by Herle and Gudehus (1999).
3.2 Minimum, maximum and critical void ratios at zero pressure state
Based on the limit densities, the maximum and minimum void ratios were estimated using
empirical equations. Three limit void ratios at zero pressure, i.e., ei0 (the void ratio during isotropic
compression at the minimum density), ec0 (the critical void ratio) and ed0 (the void ratio at
maximum density), were estimated.
Figure 2. Experimental e-log p curves of silty sand in both the loosest and densest states.
Parameter  Value
φc  33°
hs  43 MPa
n  0.509
ed0  0.413
ec0  0.890
ei0  1.068
α  0.035
β  0.5
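Under Bauer's compression law, which the hypoplastic model uses for all three limit void ratios, e(p) = e0 · exp[−(3p/hs)^n]. A minimal sketch evaluating it with the parameters listed above follows; the pressure values are simply illustrative.

import numpy as np

# Bauer's compression law for the limit void ratios (ei, ec and ed all
# scale by the same factor): e(p) = e0 * exp(-(3*p/hs)**n).
hs = 43e3   # granulate hardness in kPa (43 MPa, from the table)
n = 0.509
e_i0, e_c0, e_d0 = 1.068, 0.890, 0.413

def limit_void_ratio(e0, p_kpa):
    # Limit void ratio at mean effective pressure p (kPa).
    return e0 * np.exp(-(3.0 * p_kpa / hs) ** n)

for p in (50.0, 200.0, 400.0):  # consolidation pressures used in the study
    print(p, limit_void_ratio(e_i0, p),
          limit_void_ratio(e_c0, p), limit_void_ratio(e_d0, p))

Plotting e_c0 · exp[−(3p/hs)^n] against log p gives the curved CSL against which a state (e, p) is judged to lie on the contractive or dilative side.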
The element test program was prepared by Herle using the mathematical formulations
of the hypoplastic soil model. The test program requires three input files, namely mate-
rial parameters, initial state parameters and test conditions. Initially, the hypoplastic model
simulations were performed on oedometric compression of both the loose and dense silty
soil samples.
The overlaid curves of the experimental and model-simulated tests are pre-
sented to check the validity of the model. Figure 3 shows the combined e-log
p curves under oedometric loading. It can be seen that the model simulation results
coincide well with the experimental curves.
Before performing the numerical simulations on consolidated triaxial compression, the
numerical model is validated with the experimental data of CD triaxial test conducted on the
silty sand at a void ratio of 0.5 corresponding to the relative density of 85%. The overlapped
stress-strain relationships on silty sand from both the experimental and numerical model
are presented in Figure 4. It can be seen that the model simulation results well coincide with
experimental curves. Therefore, the present model study is extended to perform the triaxial
loading simulations under the drained conditions to examine the liquefaction susceptibility
of silty sand based on CSL concept.
Figure 3. e-log p curves on loose and dense silty sand under oedometric compression loading.
Figure 4. Comparison of experimental and numerically simulated stress-strain relationship on the silty
sand at e = 0.5.
Figure 5. (a) Stress-strain characteristics and (b) volume change response of silty sand (at different
void ratios and σ3 = 200 kPa).
Figure 6. (a) Stress-strain characteristics, (b) Stress ratio characteristics and (c) Volume change
response (loose silty sand consolidated at different pressures).
It can be seen that the normalized stress ratios decrease with an increase in consolidation
pressure, indicating the reverse trend compared with Figure 6(a). In loose silty sands, the
ultimate constant deviator stress ratios are obtained at a large strain level of 20%, which
indicates that the silty sand behaves contractively and that the stress ratio is higher at low
consolidation pressures.
Figure 6(c) presents the volume change response of silty sand consolidated at different
pressures. It demonstrates that silty sand consolidated at low pressure (σc = 50 kPa) exhibits
less contractive behaviour, i.e., less volume reduction. The stress ratios are high at low
consolidation pressures due to this lesser contraction. However, more contractive behaviour
is observed at a high consolidation pressure of 400 kPa. This indicates that the volume
reduction increases as the applied pressure increases from 50 to 400 kPa. The results show
that the behaviour of loose silty sands changes from a less contractive to a highly contractive
state with increasing applied pressure. The high contraction may take place due to the
crushing of particles under high pressures. Contractive soils are not stable and are
susceptible to liquefaction.
In this study, the CSL of silty sand is developed from drained triaxial test simulations based on
the hypoplastic model. The major findings from the study are given below:
• The effect of density on the drained response of contraction and dilation state of silty sand
depends on the applied range of consolidation pressures; similarly, the effect of consolida-
tion pressure on the response of contraction and dilation state of silty sand again depends
on denseness of soil.
• Dense silty sands exhibit dilative behaviour, indicated by a continuous increase
in deviator stress to higher values and a decrease in pore water pressure towards negative
values. However, loose silty sands exhibit contractive behaviour, showing a continuous
increase in pore water pressure and a reduction in deviator stress.
• A sharp peak stress was observed for loose silty sands at low strain levels of 2–4%,
beyond which the stress decreases towards residual levels. For dense and medium dense silty
sands, a continuous increase in deviator stress was observed up to the failure strain level of 25%.
REFERENCES
Marto, A., Tan, C.S., Makhtar, A.M., and Leong, T.K. 2014. Critical state of sand matrix soils. The
Scientific World Journal, Hindawi Publishing Corporation.
Atkinson, J.H., and Bransby, P.L. 1978. The Mechanics of Soils: An Introduction to Critical State
Soil Mechanics, McGraw-Hill.
Gudehus, G. 1996. A comprehensive constitutive equation for granular materials. Soils and Foundations,
36(1), 1–12.
Herle, I., and G. Gudehus. 1999. Determination of parameters of a hypoplastic constitutive model from
properties of grain assemblies. Mechanics of Cohesive-Frictional Materials, 4, 461–486.
Kolymbas, D. 1985. A generalized hypoelastic constitutive law. In Proceedings of the eleventh Interna-
tional Conference on Soil Mechanics and Foundation Engineering.
Sadrekarimi, A., and Olson, S.M. 2009. Defining the critical state line from triaxial compression and
ring shear tests. In Proceedings of the 17th International Conference on Soil Mechanics and Geotechni-
cal Engineering: The Academia and Practice of Geotechnical Engineering, 1, 36–39.
Kramer, S.L. 1996. Geotechnical Earthquake Engineering. Prentice Hall, New Jersey.
Phan, V.T.-A., Hsiao, D.-H., and Nguyen, P.T.L. 2016. Critical state line and state parameter of
sand-fines mixtures. Procedia Engineering—Sustainable Development of Civil, Urban and Transpor-
tation Engineering Conference, 142, 299–306.
K. Balan
Rajadhani Institute of Engineering and Technology, Thiruvananthapuram, India
S. Aswathy Nair, L.K. Vaishnavi, Megha S. Thampi, D.R. Renju & Chithra Lekshmi
LBS Institute of Technology for Women, Thiruvananthapuram, India
ABSTRACT: This paper presents a case study of the investigation of recurring breaches of
the embankment at Puthenarayiram Padasekharam, Kuttanad and the design of its reconstruc-
tion. At a location called Kundarikund in D Block of Kuttanad, a section of embankment
frequently collapses, inundating the cultivable land. A hydrographic survey was conducted to
determine the bed profile and the velocity of water was measured using a current meter. The
presence of two depressions on the paddy field side of the embankment where the breach occurs
is detected. Laboratory tests on soil samples obtained from boreholes indicated a very high void
ratio and the water content was much higher than the liquid limit. Hence, the insitu undrained
shear strength of the soil was determined by conducting a field vane shear test. The global and
internal stability were checked as per standard geotechnical practice. Only very little passive
resistance could be generated in the weak clay. To enhance the passive resistance, a berm made of
Geobags filled with soil was provided on the paddy field side of the embankment. Contiguous/
secant piles were provided on the right face of the embankment throughout the breached por-
tion. At the canal side, soldier piles are provided at a spacing of two meters connected together
by RCC (Reinforced Cement Concrete) precast slabs. The secant piles on the right side are con-
nected to the soldier piles on the left side by transverse RCC beams at a spacing of two meters.
Keywords: Kuttanad, secant piles, field vane shear test, soldier piles, Geobags, berm
1 INTRODUCTION
Kuttanad is a region covering the Alappuzha and Kottayam Districts, in the state of Kerala,
well known for its vast paddy fields and geographical peculiarities. The region has the lowest
altitude in India, and is one of the few places in the world where farming is carried out around 1.2
to 3.0 meters below sea level. Kuttanad is historically important and is a major rice producer
in the state. Farmers of Kuttanad are famous for bio saline farming. Kuttanad clay is a soft
soil with associated problems of low shear strength and compressibility (Vinayachandran
et al., 2013). The soil has a unique combination of minerals such as metahalloysite, kaolinite,
iron oxides and aluminum oxides. The diatom frustules present in the soil indicate biological
activity during the sediment formation and this also accounts for the nature of organic mat-
ter predominantly present in the soil, which is mostly derived from planktonic organisms.
A considerable amount of organic matter is present in the soil and the magnitude measured
accounts for about 14% by mass (Suganya & Sivapullaiah, 2015, 2017). Kuttanad soil is
expansive clay having a high void ratio and low density.
The cultivable land of Kuttanad is 1.2 to 3 meters below mean sea level. These lands are
kept submerged for about six to eight months of the year; during which a lot of organic
2 SITE INSPECTION
The site was inspected on 31-05-2017 by the authors along with engineers from the Irriga-
tion Department. From the site inspection, it was observed that the pile-slab system and the
coconut piles were tilted towards the paddy area. Water was flowing from the paddy area to
the Kochar river. The surface of the water body in the paddy area is very calm and hence the
possibility of undercurrents cannot be ruled out. From the observations it is presumed that the
surface of the soil profile in the paddy area is much deeper than that shown in the drawings
provided. One of the possible solutions which can be practically and cost-effectively implemented
is the pile-slab system supported by inclined piles. The inside portion of the pile-slab system should
be filled with clay reinforced with woven coir geotextiles (700 gsm).
To assess whether this solution is practically possible, the depth of soil in the paddy
area below the water surface needs to be determined, for which a hydrographic survey was
carried out.
It is ideal to also know the water velocity as turbulent flow may occur at the site. In order
to design the inclined pile, the undrained shear strength of the soil is also required. This was
obtained by conducting a field vane shear test on the natural soil below the existing bund.
3 HYDROGRAPHIC SURVEY
A hydrographic survey was conducted to determine the bed profile and the velocity of water was
measured using a current meter. The contour map of the bed obtained from the hydrographic
survey is presented in Figure 1. The results of the hydrographic survey revealed the presence of
two depressions on the paddy field side of the embankment.
Figure 1. Contour map of bed surface obtained from the hydrographic survey.
4 SOIL INVESTIGATION
5 DESIGN OF EMBANKMENT
Based on the observations from the hydrographic survey, field vane shear test and the soil
properties, the following recommendations are made.
Properties Clay
Class MH-OH
Specific gravity 2.4
Natural moisture content (%) 116
Bulk density (g/cc) 1.35
Void ratio 2.36
Porosity (%) 70
Liquid limit (%) 71
Plastic limit (%) 38
Plasticity index (%) 32
Cohesion (kPa) 0.88
% Clay & silt 94.3
% Sand 5.6
Padasekharam side
The major stabilizing force is the passive pressure from the soil on the Padasekharam side. To
utilize the passive resistance of the soil, there must be a continuous wall underneath the bed
surface for a suitable depth. Hence, it is recommended to install driven precast RCC contigu-
ous piles with a cross section of 40 cm × 40 cm and length of 12 m. These contiguous piles
must be installed for a length of 50 m including the breached portion of the bund. The top of
the contiguous piles must be joined together by providing an RCC beam of 60 × 40 cm. The
beam must be connected to the RCC top beam at the river side by transverse beams at 2 m
intervals. To increase the passive resistance from the paddy side, a berm, with geobags filled
with locally available clayey soil, must be constructed.
Geobags
The geobags must have a width of 1.2 m, length of 1.2 m and height of 1 m. There should
be four lifting points with two straps on each lifting point. The tensile strength of each strap
should at least be 15 kN. The seam strength should be 25 kN/m. The fabric of the geobag
should be woven polypropylene with a density of 325 gsm. The fabric should have a wide
width tensile strength of 55 kN/m in both machine and cross machine directions. The CBR
puncture strength should be 6 kN.
Calculations
The in situ undrained shear strength of the soil is very low. If the passive resistance of clay
alone is considered, the length of the piles will be very large and will not be feasible. Hence, it
is required to enhance the passive resistance by providing a berm. The thickness of the berm
proposed is 3 m. The top width of the berm was determined based on the width required for
the formation of a full passive wedge. The depth of embedment ‘D’ below the bottom level of
berm is calculated by taking moments of all the forces about top. The details of the various
forces acting on the wall are shown in the pressure diagram (Figure 3).
φ = 1°, c = 8.83 kPa
ka = 0.966
kp = 1.04
Unit weight of soil, γ = 13 kN/m3
Saturated unit weight, γsat = 14.3 kN/m3
Submerged unit weight, γ’ = 4.5 kN/m3
Taking moments about the top of the pile, simplifying and solving gives D = 5 m.
The total depth works out to be 10.75 m. However a total length of 12 m from the top of
the bund may be provided for the pile including factor of safety.
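The coefficients used in this calculation follow Rankine's expressions; the short check below reproduces the values quoted above (γw = 9.81 kN/m3 is an assumed standard value).

import math

phi = math.radians(1.0)          # friction angle from the calculation above
gamma_sat, gamma_w = 14.3, 9.81  # kN/m3; gamma_w assumed, not from the text

ka = (1 - math.sin(phi)) / (1 + math.sin(phi))  # Rankine active coefficient
kp = (1 + math.sin(phi)) / (1 - math.sin(phi))  # Rankine passive coefficient
gamma_sub = gamma_sat - gamma_w                 # submerged unit weight

print(round(ka, 3))         # 0.966
print(round(kp, 3))         # 1.036 (quoted as 1.04 above)
print(round(gamma_sub, 1))  # 4.5 kN/m3

With a friction angle of only 1°, ka and kp are both nearly unity, which is why the berm, rather than the passive coefficient, supplies most of the stabilizing resistance.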
The top width of the berm was determined based on the width required for the formation
of a full passive wedge. The minimum top width comes to 5.20 m. However the top width
provided is 7 m, considering the factor of safety. The bottom width was determined based on
the angle of repose from the top outer edge.
The time schedule to be followed for filling of soil inside the bund is
Stage-1: First one meter filling inside the bund
Stage-2: Second one meter after two months from the completion of first layer
Stage-3: Third one meter after two months from the completion of second layer
Stage-4: Top (fourth) layer after six months from the completion of third layer
The berm must also be constructed simultaneously during the filling of the bottom layers
inside the bund.
The soil for filling the bund/berm must be taken from sites at least 150 m away from the
bund.
The soil fill inside the bund must be reinforced with woven coir geotextile of 700 gsm with
12.5 mm opening size. The vertical spacing of reinforcement must be 1 m. The cross section
of the embankment is presented in Figure 4.
5.2 Reconstruction of the bund for a length of 35 m on either side of the collapsed portion
(length 70 m)
Provide RCC precast soldier piles with a cross section of 30 × 30 cm and length 7.8 m at a
spacing of 2 m, connected by RCC precast slabs. The top of the soldier piles must be con-
nected together by an RCC beam with a cross section of 40 × 40 cm.
Berms (Pilla Bund) with a top width of 2 m (similar to the collapsed portion) must be
constructed by driving a row of coconut piles and filling soil on both sides of the bund as
detailed in Figure 5.
Woven multifilament polypropylene geotextile must be provided at the external surface as
shown in Figure 4. The specifications of this geotextile are as follows:
Figure 5. Cross section of the proposed embankment on either side of the breached portion, for a
length of 35 m.
6 CONCLUSIONS
The reasons for the recurring breaches of the embankment at Puthenarayiram Padasekharam,
Kuttanad have been investigated. The findings are:
• The presence of two depressions of about 5 m in depth on the paddy field side of the
embankment was hampering the global stability.
• The pile-slab construction method practiced by the Irrigation Department could not har-
ness the passive resistance of the soil below the embankment.
• A revised design of embankment has been proposed with secant piles and a berm on the
paddy field side.
REFERENCES
Suganya, K. & Sivapullaiah, P.V. (2015). Effect of changing water content on the properties of Kuttanad
soil. Geotechnical and Geological Engineering, 33, 913–921.
Suganya, K. & Sivapullaiah, P.V. (2017). Role of composition and fabric of Kuttanad clay: A geotech-
nical perspective. Bulletin of Engineering Geology and the Environment, 76, 371–381.
Vinayachandran. N., Narayana. A.C., Najeeb. K.M., & Narendra. P. (2013). Disposition of aquifer
system, geo-electric characteristics and gamma-log anomaly in the Kuttanad alluvium of Kerala,
Journal of the Geological Society of India 81, 183–191.
Aleena Mariam Saji, Alen Ann Thomas, Greema Sunny, J. Jayamohan & V.R. Suresh
LBS Institute of Technology for Women, Thiruvananthapuram, India
ABSTRACT: Geosynthetics demonstrate their beneficial effects only after considerable settle-
ment, since the strains occurring during initial settlements are insufficient to mobilize significant
tensile load in the geosynthetic. This is not a desirable feature since for foundations of certain
structures, the values of permissible settlements are low. Anchoring the reinforcement is a prom-
ising technique yet to be comprehensively studied. This paper presents the results of a series of
finite element analyses carried out to investigate the improvement in load-settlement behaviour
of a strip footing resting on a Reinforced Foundation Bed due to anchoring the Geosynthetic
Reinforcement. It is observed that the bearing capacity can be considerably increased without the
occurrence of excessive settlement by anchoring the geosynthetic reinforcement with micropiles.
1 INTRODUCTION
The decreasing availability of proper construction sites has led to the increased use of mar-
ginal ones, where the bearing capacity of the underlying deposits is very low. By the applica-
tion of geosynthetics it is possible to use shallow foundations even in marginal soils instead
of expensive deep foundations. This is done by either reinforcing cohesive soil directly or
replacing the poor soils with stronger granular fill in combination with geosynthetic rein-
forcement. In low-lying areas with poor foundation soils, the geosynthetic reinforced foun-
dation bed can be placed over the weak soil. The resulting composite ground (reinforced
foundation bed) will improve the load carrying capacity of the footing and will distribute
the stresses on a wider area on the underlying weak soils, hence reducing settlements. Dur-
ing the past 30 years, the use of reinforced soils to support shallow foundations has received
considerable attention. Many experimental and analytical studies have been performed to
investigate the behaviour of reinforced foundation beds for different soil types (e.g. Binquet
and Lee (1975); Shivashankar et al. (1993)). Several experimental and analytical studies were
conducted to evaluate the bearing capacity of footings on reinforced soil (e.g. Shivashankar
and Setty (2000); Shivashankar and Reddy (1998); Madhavilatha and Somwanshi (2009);
Alamshahi and Hataf (2009); Vinod et al. (2009); Arun et al. (2008); etc.).
It is now known that geosynthetics demonstrate their beneficial effects only after con-
siderable settlements, since the strains occurring during initial settlements are insufficient
to mobilize significant tensile load in the geosynthetic. This is not a desirable feature since
for foundations of certain structures, the values of permissible settlements are low. Thus
there is a need for a technique which will allow the geosynthetic to increase the load bearing
capacity of soil without the occurrence of large settlements. Lovisa et al. (2010) conducted
laboratory model studies and finite element analyses on a circular footing resting on sand
reinforced with prestressed geotextile. It was found that the addition of prestress to reinforce-
ment resulted in a significant improvement in the load bearing capacity and reduction in set-
tlement of foundation. Lackner et al. (2013) conducted about 60 path controlled static load
Finite element analyses are carried out using the commercially available finite element soft-
ware PLAXIS 2D. For simulating the behaviour of soil, different constitutive models are
available in the FE software. In the present study Mohr-Coulomb model is used to simu-
late soil behaviour. This non-linear model is based on the basic soil parameters that can be
obtained from direct shear tests: the internal friction angle and the cohesion intercept. Since a strip
footing is used, a plane strain model is adopted in the analysis. The settlement of the rigid
footing is simulated using non-zero prescribed displacements.
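As context for the soil model, the Mohr-Coulomb failure criterion reduces to a single expression; the sketch below is generic, with placeholder parameter values rather than the study's inputs.

import math

def mc_shear_strength(sigma_n, c, phi_deg):
    # Mohr-Coulomb criterion: tau_f = c + sigma_n * tan(phi).
    return c + sigma_n * math.tan(math.radians(phi_deg))

# Placeholder values of the kind obtained from a direct shear test:
print(mc_shear_strength(sigma_n=100.0, c=5.0, phi_deg=30.0))  # approx. 62.7 kPa

In PLAXIS 2D, these two strength parameters, together with the elastic modulus and Poisson's ratio, define the Mohr-Coulomb material set assigned to the soil clusters.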
The displacement of the bottom boundary is restricted in all directions, while at the verti-
cal sides, displacement is restricted only in the horizontal direction. The initial geostatic stress
behaviour. It is seen from Figures 3 and 4 that as the stiffness of anchor increases, the improve-
ment in load-settlement behaviour increases.
4 CONCLUSIONS
Based on the results of Finite Element Analyses carried out the following conclusions are
drawn.
• Anchoring the geosynthetic reinforcement considerably improves the load-settlement
behaviour.
• The improvement factor increases with the stiffness of the anchor.
REFERENCES
Alamshahi, S. and Hataf, N. (2009). Bearing capacity of strip footings on sand slopes reinforced with
geogrid and grid-anchor, Geotextiles and Geomembranes, 27 (2009) 217–226.
Arun Kumar Bhat, K., Shivashankar, R. and Yaji, R.K. (2008), “Case study of land slide in NH 13
at Kethikal near Mangalore, India", 6th International Conference on Case Histories in Geotechnical
Engineering, Arlington, VA, USA, paper no. 2.69.
Binquet, J. and Lee, K.L. (1975). Bearing capacity tests on reinforced earth slabs. Journal of Geotechni-
cal Engineering Division, ASCE 101 (12), 1241–1255.
Lackner, C., Bergado, D.T. and Semprich, S. (2013). Prestressed reinforced soil by geosynthetics—
concept and experimental investigations, Geotextiles and Geomembranes, 37, 109–123.
Lovisa, J., Shukla, S.K. and Sivakugan, N. (2010). Behaviour of prestressed geotextile-reinforced sand
bed supporting a loaded circular footing, Geotextiles and Geomembranes, 28 (2010) 23–32.
Madhavilatha, G. and Somwanshi, A. (2009). Bearing capacity of square footings on geosynthetic reinforced sand, Geotextiles and Geomembranes, 27 (2009) 281–294.
Shivashankar, R., Madhav, M.R. and Miura, N. (1993). Reinforced granular beds overlying soft clay, Proceedings of 11th South East Asian Geotechnical Conference, Singapore, 409–414.
Shivashankar, R. and Reddy, A.C.S. (1998). Reinforced granular bed on poor filled up shedi ground,
Proceedings of the Indian Geotechnical Conference – 1998, Vol. 1, 301–304.
Shivashankar, R. and Setty, K.R.N.S. (2000). “Foundation problems for ground level storage tanks in
and around Mangalore”, Proceedings of Indian Geotechnical Conference 2000, IIT Bombay, Mumbai.
Unnikrishnan and Aparna Sai (2014). Footings in clay soil supported on encapsulated granular trench, Proceedings of Indian Geotechnical Conference 2014, Kakinada.
Vinod, P., Bhaskar, A.B. and Sreehari, S. (2009). Behaviour of a square model footing on loose sand
reinforced with braided coir rope, Geotextiles and Geomembranes, 27 (2009) 464–474.
ABSTRACT: Geosynthetic materials like geotextiles, geogrids and geocells have gained widespread acceptance over recent years due to their superior engineering characteristics and quality control. The rising cost of, and the environmental concerns created by, these synthetic reinforcement materials make it necessary to explore alternative resources for soil reinforcement. Coir is an eco-friendly, biodegradable, organic material which has high strength, stiffness and durability characteristics compared to other natural reinforcement materials.
This paper deals with a systematic series of plate load tests on unreinforced sand and sand
reinforced with coir geotextiles. A significant enhancement in strength and stiffness charac-
teristics was obtained with the provision of coir reinforcement. Based on the test results, an
Artificial Neural Network (ANN) model has also been established for predicting the strength
of sand beds when these reinforcement elements are applied practically. The predicted values
from the model and those obtained from the experimental study are found to have a good
correlation.
Keywords: Coir geotextile, artificial neural network, bearing pressure, subgrade stabilization
1 INTRODUCTION
In recent years, geosynthetics have become increasingly popular for their use as a reinforce-
ment in earth structures. The use of geosynthetics in reinforcing sand beds has been stud-
ied by various researchers (Abu-Farsakh et al., 2013; Guido et al., 1986; Akinmusuru &
Akinbolade, 1981; Omar et al., 1993; Ghosh et al., 2005; Latha & Somwanshi, 2009; Sharma
et al., 2009). Synthetic fibers have a longer life and do not generally undergo biological deg-
radation, thus minimizing environmental concern. Coir geotextiles can be considered as an
efficient replacement for their synthetic counterparts due to their economy and excellent
engineering properties. The use of coir as a reinforcement material has been studied by vari-
ous researchers (Lekha, 1997; SivakumarBabu et al., 2008; Vinod et al., 2009; Subaida et al.,
2008). India is one of the leading coir producing countries. While the world focus is shifting to natural geotextiles, India, as a producer of coir geotextiles, has much to gain by using them to meet domestic as well as global demands. Natural geotextiles are becoming increasingly
popular in various geotechnical applications like construction of embankments, subgrade
stabilization, slope protection work, weak soil improvement and so on. From the studies
reported so far, it is perceived that the potential of coir products as a reinforcement material
is under-utilized.
Empirical models based on Artificial Neural Networks (ANN) have been widely used
for numerous applications in geotechnical engineering. Neural networks have proved to be
an efficient tool to predict the behavior of soils under different test conditions, especially when the relationship between the input and output variables is complex. Although models
An Artificial Neural Network (ANN) is a form of artificial intelligence, which tries to simu-
late the behavior of the human brain and nervous system. In recent years, many researchers
have investigated the use of artificial neural networks in geotechnical engineering applica-
tions and have obtained reassuring results (e.g. Kung et al., 2007; Kuo et al., 2009; Ornek
et al., 2012; Harikumar et al., 2016). MATLAB software was used for formulating the ANN
model. The parameters used for creating the model are listed in Table 1.
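The model itself was formulated in MATLAB; purely as an illustration of the idea, an analogous feedforward-network regression can be sketched in Python with scikit-learn. The input features and training values below are hypothetical stand-ins for the parameters listed in Table 1:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical inputs: [reinforcement depth ratio u/B, settlement ratio s/B]
X = np.array([[0.25, 0.02], [0.25, 0.05], [0.50, 0.02], [0.50, 0.05]])
y = np.array([180.0, 320.0, 150.0, 260.0])  # hypothetical bearing pressures (kPa)

# A small single-hidden-layer network, analogous in spirit to the MATLAB model
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[0.25, 0.03]]))  # prediction for an unseen case
```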
The sand used for the tests had a specific gravity of 2.65, an effective size of 0.32 mm, a coefficient of uniformity of 2.56, a coefficient of curvature of 0.88 and an angle of friction of 38.5° (at 60% relative density). Classification according to the Unified Soil Classification System (USCS) is SP
(poorly graded sand). All tests were done at a relative density of 60% to simulate medium
dense condition. Figure 1 shows a photograph of the coir geotextiles used for the study. The
properties of woven coir geotextiles were determined as per Indian standards (IS: 13162,
1992; IS: 14716, 1999).
The model test was conducted in a steel tank 750 mm × 750 mm × 750 mm. The model footing was a square steel plate 150 mm × 150 mm, with a thickness of 25 mm. A hand-operated hydraulic jack was used for loading the footing and a pressure gauge of 100 kN capacity was fitted to measure the applied load. Figure 2 shows a photograph of the test setup. The objectives
of the tests were to study the influence of coir geotextiles on improving the overall perform-
ance of the sand foundations. The test series included varying the depth of the reinforcement
layer from the top of footing (u). An artificial neural network model has also been established
based on the test results. MATLAB software was used to formulate the ANN model. The
technique of formulating empirical models is reliable and relatively easy compared with a
numerical study.
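As a worked illustration of how the jack readings translate into bearing pressure on the 150 mm plate, the load is simply divided by the plate area; the helper below and its 9 kN example load are hypothetical:

```python
def bearing_pressure_kpa(load_kn, plate_width_m=0.15):
    """Applied bearing pressure under a square model plate (kN/m^2 = kPa)."""
    return load_kn / (plate_width_m ** 2)

print(bearing_pressure_kpa(9.0))  # a 9 kN load on the 150 mm plate -> 400 kPa
```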
Figure 3 shows the variation of measured values with the values predicted by the model.
From the figure, it can be seen that the bearing capacity predicted by the model and those
obtained from the experimental study are in good agreement.
It can be further seen that the provision of coir reinforcement enhances the strength char-
acteristics of reinforced soil (an almost 95% increase in strength can be observed, even with a
single layer of reinforcement). Additionally, maximum improvement was observed when the
reinforcement was provided at a depth of 0.25 times the width of the foundation (0.25 B).
Furthermore, the predicted and observed values were found to have a good correlation (see
Figure 4), thus establishing the validity of the proposed model.
5 CONCLUSIONS
A detailed, comprehensive study on the performance of coir geotextiles was conducted using a plate load testing apparatus. The results demonstrated that the coir reinforcement inclusion significantly enhances the strength and stiffness characteristics of the sand bed.
REFERENCES
Abu-Farsakh, M., Chen, Q. & Sharma, R. (2013). An experimental evaluation of the behavior of foot-
ings on geosynthetic-reinforced sand. Soils and Foundations, 53(2), 335–348.
Akinmusuru, J.O. & Akinbolade, J.A. (1981). Stability of loaded footings on reinforced soil. Journal of
Geotechnical Engineering Division, ASCE, 107, 819–827.
Ghosh, A., Ghosh, A. & Bera, A.K. (2005). Bearing capacity of square footing on pond ash reinforced
with jute-geotextile. Geotextiles and Geomembranes, 23(2), 144–173.
Guido, V.A., Chang, D.K. & Sweeney, M.A. (1986). Comparison of geogrid and geotextile reinforced
earth slabs. Canadian Geotechnical Journal, 23, 435–440.
Harikumar, M., Sankar, N. & Chandrakaran, S. (2016). Behavior of model footing on sand bed rein-
forced with multi-directional reinforcing elements. Geotextiles and Geomembranes, 44, 568–578.
IS: 13162. 1992. Geotextiles—methods of test for determination of thickness at specified pressure. New
Delhi: Bureau of Indian Standards.
IS: 14716. 1999. Geotextiles—determination of mass per unit area. New Delhi: Bureau of Indian
Standards.
Kung, G.T.C., Hsiao, E.C.L., Schuster, M. & Juang, C.H. (2007). A neural network approach to esti-
mating deflection of diaphragm walls caused by excavation in clays. Computers and Geotechnics,
34(5), 385–396.
Kuo, Y.L., Jaksa, M.B., Lyamin, A.V. & Kaggwa, W.S. (2009). ANN-based model for predicting the
bearing capacity of strip footing on multi-layered cohesive soil. Computers and Geotechnics, 36(3),
503–516.
Latha, G.M. & Somwanshi, A. (2009). Bearing capacity of square footings on geosynthetic reinforced sand. Geotextiles and Geomembranes, 27(4), 281–294.
Lekha, K.R. (1997). Coir geotextiles for erosion control along degraded hill slopes. Proceedings of Semi-
nar on Coir Geotextiles, Coimbatore, India.
Omar, M.T., Das, B.M., Puri, V.K. & Yen, S.C. (1993). Ultimate bearing capacity of shallow founda-
tions on sand with geogrid reinforcement. Canadian Geotechnical Journal, 30, 545–549.
Ornek, M., Laman, M., Demir, A. & Yildiz, A. (2012). Prediction of bearing capacity of circular foot-
ings on soft clay stabilized with granular soil. Soils and Foundations, 52(1), 69–80.
Sharma, R., Chen, Q., Abu-Farsakh, M. & Yoon, S. (2009). Analytical modelling of geogrid reinforced
soil foundation. Geotextiles and Geomembranes, 27, 63–72.
Sivakumar Babu, G.L., Vasudevan, A.K. & Sayida, M.K. (2008). Use of coir fibres for improving the engineering properties of expansive soils. Journal of Natural Fibers, 5(1), 1–15.
Subaida, E.A., Chandrakaran, S. & Sankar, N. (2008). Experimental investigations on tensile and pull-
out behaviour of woven coir geotextiles. Geotextiles and Geomembranes, 26(5), 384–392.
Vinod, P., Ajitha, B. & Sreehari, S. (2009). Behavior of a square model footing on loose sand reinforced
with braided coir rope. Geotextiles and Geomembranes, 27, 464–474.
Jayamohan Jayaraj
LBS Institute of Technology for Women, Trivandrum, Kerala, India
S.R. Soorya
Department of Civil Engineering, Marian Engineering College, Trivandrum, Kerala, India
ABSTRACT: Due to the limited space available for the construction of structures and the support of heavy loads, foundations are often placed close to each other; the footings interact with each other and their behavior thus differs from that of a single isolated footing. This study aims to determine experimentally the effect of interference of closely spaced shallow footings (strip footings) resting on granular beds overlying a 'weak soil'. The laboratory model tests were carried out at different 'center to center' spacings between the footings. The ultimate bearing capacity of the footings increased up to a certain critical spacing and thereafter decreased. The bearing capacity of the interfering footings improved due to the provision of a granular bed.
Keywords: laboratory model tests, interference effect, critical spacing, bearing capacity,
granular bed
1 INTRODUCTION
In urban areas, due to the limited space available, foundations are often placed close to each other, resulting in interference with each other. The interference of the failure zones of the footings alters the bearing capacity and load-settlement characteristics of the closely spaced footings. Stuart (1962) was the first to investigate exclusively the effect of interference of closely spaced strip footings on ultimate bearing capacity. Using the limit equilibrium technique, he indicated that the interference of two footings on sand leads to an increase in their ultimate bearing capacity. He also demonstrated that there exists a certain critical spacing between two footings at which the ultimate bearing capacity becomes maximum. The interference effect is attributed to the phenomenon called the 'blocking effect' or 'arching effect'. According to this phenomenon, the soil between the two footings forms an inverted arch, and the combined system of soil and two footings moves down upon loading as a single unit. Since the area of this single unit is greater than the sum of the areas of the two footings, it results in a greater bearing capacity. On the other hand, Stuart stated that the interference effect of adjacent footings on clay would act differently to that on sand, and concluded that the interference effect would not exhibit any change in bearing capacity as the spacing between the footings decreased.
Selvadurai and Rabbaa (1983) studied the contact stress distribution beneath two interfering rigid strip footings of equal width, resting in frictionless contact on a layer of dense sand underlain by a smooth, rigid base. The study showed that the contact stress distribution for a single isolated foundation has a symmetrical shape and, as the spacing between the adjacent footings decreases, the contact stress distribution exhibits an asymmetrical shape. Das and Larbi-Cherif (1984) conducted laboratory model tests on two
2.1 Materials
For the laboratory tests, well-graded medium sand was used for the granular bed, and locally available clay was used as the 'weak soil'. The properties of the soils are presented in Table 1 and Table 2.
of Width (B) was expressed as a Spacing to Footing Width Ratio (S/B). The experimental program is shown in Table 3.
4 CONCLUSIONS
In the present study, a series of laboratory-scale model tests was performed to determine the load-settlement behavior of interfering strip footings resting on a granular bed overlying a weak soil. The following conclusions are drawn:
• Interference of adjacent footings significantly affects the load-settlement behavior. There is an optimum distance between the footings at which the bearing capacity has its maximum value due to the blocking effect. For footings resting on clay, the blocking effect appears at S/B = 3, and for footings resting on the granular bed, it appears at S/B = 2.5. However, the critical value of S/B at which IF becomes maximum is not a fixed value.
• The trend obtained from the laboratory study is similar to the trend discussed in the litera-
ture review on the interference effect of two footings on sand.
• The bearing capacity of closely spaced strip footings improved with the provision of gran-
ular bed over a weak soil.
REFERENCES
Das, B.M. & Larbi-Cherif, S. (1984). Bearing capacity of two closely-spaced shallow foundations on sand.
Desai, M.V.G. & Moogi, V.V. (2016). Study of interference of strip footing using PLAXIS-2D. Interna-
tional Advanced Research Journal in Science, Engineering and Technology, 3(9), 13–17.
Ghosh, P. & Sharma, A. (2010). Interference effect of two nearby strip footings on layered soil: Theory
of elasticity approach. Acta Geotechnica, 5, 189–198.
Graham, J., Raymond, C.P. & Suppiah, A. (1984). Bearing capacity of three closely-spaced footings on
sand. Geotechnique, 34(2), 173–182.
Kouzer, K.M. & Kumar, Jyant. (2008). Ultimate bearing capacity of equally spaced multiple strip
footings on cohesionless soils without surcharge. International Journal for Numerical and Analytical
Methods in Geomechanics, 32, 1417–1426.
Kumar, J. & Bhoi, M.K. (2009). Interference of two closely spaced strip footings on sand. Journal of
Geotechnical and Geoenvironmental Engineering, 135(4), 595–604.
Pusadkar, S.S., Gupta, R.R. & Soni, K.K. (2013). Influence of interference of symmetrical footings on
bearing capacity of soil. International Journal of Engineering Inventions, 2(3), 63–67.
Reddy, S.E., Borzooei, S. & Reddy, N.G.V. (2012). Interference between adjacent footings on sand.
International Journal of Advanced Engineering Research and Studies, 1(4), 95–98.
Selvadurai, A.P.S. & Rabbaa, S.A.A. (1983). Some experimental studies concerning the contact stresses beneath interfering rigid strip foundations resting on a granular stratum. Canadian Geotechnical Journal, 20, 406–415.
Stuart, J.G. (1962). Interference between foundations, with special reference to surface footings in sand. Geotechnique, 12(1), 15–22.
B. Anusha Nair, Akhila Vijayan, S. Chandni, Shilpa Vijayan, J. Jayamohan & P. Sajith
LBS Institute of Technology for Women, Thiruvananthapuram, India
ABSTRACT: In general, the cross section of footings provided for structures is rectangular. There are well-accepted theories to determine the bearing capacity and settlement of a footing with a flat base. By altering the cross-sectional shape of the footing, better confinement of the underlying soil can be attained, thereby improving the bearing capacity and reducing settlements. In this investigation, a series of finite element analyses is carried out to determine the influence of the shape of the cross section of the footing on the load-settlement behaviour of strip footings. It is observed that by altering the shape of the cross section of footings, better confinement of the underlying soil can be achieved, thereby improving the load-settlement behaviour.
1 INTRODUCTION
The foundation transmits the load of the structure safely to the ground, without under-
going any shear failure or excessive settlement. The bearing capacity of footings has been
extensively studied, both theoretically and experimentally, over many decades. The
theoretical approach was initiated by Prandtl (1921) and Reissner (1924), and the design-
oriented bearing capacity equation (fully considering the soil unit weight, cohesion, and
friction angle) was proposed by Terzaghi (1943). After the early development of the bear-
ing capacity solution, most efforts have focused mainly on a more realistic derivation of
Finite element analyses are carried out using the commercially available finite element soft-
ware PLAXIS 2D. For simulating the behaviour of soil, different constitutive models are
available in the FE software. In the present study Mohr-Coulomb model is used to simu-
late soil behaviour. This non-linear model is based on the basic soil parameters that can be
Vertical stress vs settlement curves for the rectangular cross section with flanges, obtained from the finite element analyses, are presented in Figure 3. It is seen that the presence of flanges below the edges of the strip footing improves the load-settlement behaviour. The improvement is marginal for d/B values up to 0.6, whereas for values of d/B > 0.6 it is considerable. The load-settlement behaviour of the footing with a sloping cross section is presented in Figure 4. The optimum improvement is observed when d/B = 0.4. For higher values of d/B,
Figure 3. Vertical stress vs settlement curves for rectangular cross section with flanges.
the improvement reduces. This behaviour is in sharp contrast to that of the rectangular cross section with flanges. To quantify the improvement in load-settlement behaviour attained by altering the cross-sectional shape, an Improvement Factor (If) is defined as the ratio of the stress sustained with the various shapes of cross section to that with the rectangular cross section, at 0.5 mm settlement.
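In symbols (with q denoting the vertical stress sustained at a given settlement, a notation assumed here for compactness), the definition reads:

If = q(shaped cross section) / q(rectangular cross section), evaluated at a settlement of 0.5 mm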
It is seen that for the rectangular cross section with flanges, the improvement factor initially increases and then reduces at d/B = 0.6. However, for further higher values of d/B, the improvement factor increases again. The optimum value of d/B for the footing with a sloping cross section is observed to be 0.4. For higher values of d/B, the improvement factor reduces.
4 CONCLUSIONS
From the finite element analyses carried out, the following conclusions are drawn.
• The load-settlement behaviour can be improved by altering the shape of cross section of
footings
• The improvement factor depends on the geometrical parameters of the cross section
REFERENCES
Craig, R.F. (2004). Craig’s soil mechanics, Taylor & Francis, New York.
Erickson Hans L., Drescher Andrew. Bearing capacity of circular footings. J Geotech Geoenviron Eng, ASCE 2002; 128(1):38–43.
Griffiths D.V., Fenton Gordon A., Manoharan N. Bearing capacity of rough rigid strip footing on
cohesive soil: probabilistic study. J Geotech Geoenviron Eng, ASCE 2002; 128(9):743–50.
Hossain, M.S., and Randolph, M.F. (2009). “New mechanism-based design approach for spudcan foun-
dations on single layer clay.” J. Geotech. Geoenviron. Eng., 10.1061/(ASCE)GT.1943-5606.0000054,
1264–1274.
Hossain, M.S., and Randolph, M.F. (2010). “Deep-penetrating spudcan foundations on layered clays:
centrifuge tests.” Géotechnique, 60(3), 157–170.
Hu, Y., Randolph, M.F., and Watson, P.G. (1999). “Bearing response of skirted foundation on non
homogeneous soil.” J. Geotech. Geoenviron. Eng., 10.1061/(ASCE)1090-0241 (1999), 125:11(924),
924–935.
Lee J. and Salgado R. Estimation of bearing capacity of circular footing on sands based on CPT. J
Geotech Geoenviron Eng, ASCE 2005; 131(4):442–52.
Merifield, R.S., Sloan, S.W., and Yu, H.S. (2001). “Stability of plate anchors in undrained clay.”
Géotechnique, 51(2), 141–153.
Meyerhof G. Shallow foundations. J Soil Mech Found Div, ASCE 1965; 91(SM2):21–31.
Michalowski Radoslaw L. and Dawson M.E. Three-dimensional analysis of limit loads on Mohr–Coulomb soil. Found Civil Environ Eng 2002; 1(1):137–47.
Ming Zhu and Radoslaw L. Michalowski, (2005), “Shape Factors for Limit Loads on Square and
Rectangular Footings”, Journal of Geotechnical and Geoenvironmental Engineering, Vol. 131, No. 2,
ASCE, 223–231.
Prandtl L. Über die Eindringungsfestigkeit (Härte) plastischer Baustoffe und die Festigkeit von Schnei-
den. Zeit Angew Math Mech 1921(1):15–20.
Reissner H. Zum Erddruckproblem. In: Proceedings of first international congress of applied mechan-
ics. Delft; 1924. p. 295–311.
Salgado R., Lyamin A.V., Sloan S.W., Yu H.S. Two-and three-dimensional bearing capacity of founda-
tion in clay. Geotechnique 2004; 54(5):297–306.
Singh, S.K., and Monika, K., (2016), "Load carrying capacity of shell foundations on treated and untreated soils", Indian Geotechnical Conference 2016, IIT Madras.
Taiebat H.A., Carter J.P. Numerical studies of the bearing capacity of shallow foundations. Geotechnique 2000; 50(4):409–18.
Terzaghi K. Theoretical soil mechanics. New York: Wiley; 1943.
Wang, C.X., and Carter, J.P. (2002). “Deep penetration of strip and circular footings into layered clays.”
Int. J. Geomech.,10.1061/(ASCE)1532-641, (2002), 2:2(205), 205–232.
Zhang, Y., Bienen, B., Cassidy, M.J., and Gourvenec, S. (2012). “Undrained bearing capacity of deeply
buried flat circular footings under general loading.” J. Geotech. Geoenviron.Eng., 10.1061/(ASCE)
GT.1943-5606.0000606, 385–397.
Zhang, Y., Wang, D., Cassidy, M.J., and Bienen, B. (2014). “Effect of installation on the bearing capac-
ity of a spudcan under combined loading in soft clay.” J. Geotech.Geoenviron. Eng., 10.1061/(ASCE)
GT.1943-5606.0001126, 04014029.
K.S. Beena
Department of Civil Engineering, Cochin University of Science and Technology, Kochi, Kerala, India
ABSTRACT: P-wave velocity and shear wave velocity are the dynamic parameters used to
determine soil characteristics. Ultrasonic pulse velocity tests are usually used for p-wave velocity
measurements. In this study, tests were conducted to investigate the effect of water content and
dry densities on p-wave velocity and unconfined compressive strength and thus, to establish a
correlation between p-wave velocity and unconfined compressive strength. Marine clay was the
material used for the study. Experiments were conducted on a soil sample of diameter 3.75 cm
and length 7.5 cm. Results show that p-wave velocity and unconfined compressive strength dem-
onstrate a similar trend while varying the parameters such as water content and dry density. The-
oretical correlations connecting elastic wave velocities and electrical resistivity were established
for finding the shear wave velocities. Gassmann’s and Archie’s equations for porosity were used
to derive the equation for shear wave velocity. Electrical resistivity was calculated experimentally.
1 INTRODUCTION
Elastic waves are generated in nature by the movement of tectonic plates, explosions, landslides and so on. Seismic waves are the vibrations generated in the interior of the earth, during rupture or explosions, which carry energy away from the source. Vibrations which travel through the interior of the earth rather than along its surface are called body waves. Body waves are classified into two types: primary or compression waves (p-waves), and secondary or shear waves (s-waves). The determination of elastic wave velocities is important for different field applications such as the insertion of deep foundations in soil, soil stabilization, compaction characteristics, anisotropic behavior of soils, stiffness evaluation of soils, sample quality determination, stratification of soils and so on.
These waves can also be modeled artificially by using different piezoelectric transducers in the
field as well as in the laboratory. P-wave velocity is mainly determined using ultrasonic meth-
ods. Ultrasonic methods are usually used for assessing concrete quality and strength determina-
tion (Lawson et al., 2011). Similarly, ultrasonic methods can also be used for investigating the
compaction characteristics of soils, finding that the variation of velocity with water content is
similar to the variation of density with water content (Nazli et al., 2000). P-wave velocity, shear
wave velocity and damping characteristics can be estimated by different calibrated ultrasonic
equipment (Zahid et al., 2011). Elastic wave velocities are determined by piezo disk elements
and electrical resistivity by electrical resistivity probes. A theoretical correlation connecting the
elastic wave velocity and electrical resistivity was proposed by Jong et al. (2015).
The focus of this study was to find the primary wave velocity of soft soil using an ultra-
sonic pulse velocity test at different water contents and dry densities and to correlate the
primary wave velocities to the unconfined compressive strength of the soil. The shear wave
velocity of the soil was determined by proposing a new equation for shear wave velocity
which connects elastic wave velocity and electrical resistivity.
2 MATERIALS USED
The soil used for the study was marine clay, which was collected from Kochi. It is blackish in
color. Figure 1 shows the soil used for this study.
The soil properties were determined and are listed in Table 1.
3 TEST DESCRIPTION
To prepare specimens of diameter 3.75 cm and length 7.5 cm for the unconfined compression test and the ultrasonic pulse velocity test, three densities, 1.62 g/cc, 1.5 g/cc and 1.4 g/cc, were fixed from the compaction curve. The compaction curve is shown in Figure 2. Water contents corresponding to these densities were taken from the wet and dry sides of the compaction curve and are tabulated in Table 2.
Table 2. Dry densities and corresponding water contents.
Sl. no.   Dry density (g/cc)   Water content (%)
1         1.62                 20.0
2         1.50                 17.5
3         1.50                 21.4
4         1.40                 15.3
5         1.40                 23.0
where α is the cementation factor, taking values between 0.6 and 3.5, and 'm' is the shape factor indicating the shape of the particles, the porous structure and the specific surface; it takes values in the range of 1.4–2.2. The only term which connects the electrical resistivity and the elastic wave velocity is the porosity. The movement of waves results in volume change in the soil. Gassmann suggested (Jong et al., 2015) an equation for the bulk modulus in terms of the bulk modulus of the soil grains (Bg), the bulk modulus of the soil skeleton (Bsk), the bulk modulus of the pore fluid (Bf) and the porosity (n):

B_Gassmann = Bsk + (1 − Bsk/Bg)² / [n/Bf + (1 − n)/Bg − Bsk/Bg²]  (2)

Gassmann suggested another equation for the bulk modulus in terms of the bulk density and the elastic wave velocities:

B_Gassmann = ρ(Vp² − (4/3)Vs²)  (3)

Equating Equations 2 and 3 and solving for the porosity gives:

n_Gassmann = Bf(Bg − Bsk)(Bg − B) / [Bg(Bg − Bf)(B − Bsk)], with B = ρ(Vp² − (4/3)Vs²)  (4)

From Equation 1:

n_Archie = (Emix/(αEel))^(−1/m) (Jong et al., 2015)  (5)

By equating Equations 4 and 5, a new equation for finding the shear wave velocity is derived:

Vs = √3 · √[Emix^(−1/m) Bg(Bg − Bf)(ρVp² − Bsk) + αEel Bf(Bg − Bsk)(ρVp² − Bg)] / {2√ρ · √[Emix^(−1/m) Bg(Bg − Bf) + αEel Bf(Bg − Bsk)]}  (6)
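As a numerical cross-check of Equation 6 as reconstructed above, a minimal sketch is given below; the function and parameter names are illustrative, and consistent SI units are assumed throughout:

```python
import math

def shear_wave_velocity(rho, vp, e_mix, e_el, b_g, b_sk, b_f, alpha, m):
    """Shear wave velocity (m/s) from Equation 6.

    rho: bulk density; vp: p-wave velocity;
    e_mix, e_el: electrical resistivities of the soil mixture and pore fluid;
    b_g, b_sk, b_f: bulk moduli of grains, skeleton and pore fluid;
    alpha, m: Archie cementation and shape factors.
    """
    p = e_mix ** (-1.0 / m) * b_g * (b_g - b_f)   # grain/skeleton term
    q = alpha * e_el * b_f * (b_g - b_sk)         # pore fluid term
    num = p * (rho * vp ** 2 - b_sk) + q * (rho * vp ** 2 - b_g)
    return math.sqrt(3.0 * num / (p + q)) / (2.0 * math.sqrt(rho))
```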
4.1.2 Variation of ultrasonic pulse velocity and unconfined compressive strength with varying water contents
Table 3 shows the obtained p-wave velocities and unconfined compressive strengths. As the water content increases from 17.5 to 21.4%, the p-wave velocity and the unconfined compressive strength decrease, for a dry density of 1.5 g/cc. For 1.4 g/cc, as the water content increases from 15.3 to 23%, both the p-wave velocity and the unconfined compressive strength again decrease. Waves propagate faster in solids than in liquids for the same dry density.
5 CONCLUSIONS
P-wave velocity and shear wave velocity are important parameters in geotechnical engineer-
ing and can be used to predict the properties of soil without sampling and testing. The ultra-
sonic pulse velocity test is mainly used to determine the p-wave velocity.
As the dry density increases, both compression wave velocity and unconfined compres-
sive strength increase for both dry of optimum and wet of optimum water contents. This is
because the wave transmission through solids is faster than through voids. P-wave velocity
and unconfined compressive strength decrease with an increase in moisture content for the
same dry density. The decrease in p-wave velocity is due to the slower rate of wave propaga-
tion in liquids than in solids. Therefore, the ultrasonic pulse velocity can be used as a param-
eter to estimate unconfined compressive strength of soil indirectly.
A theoretical correlation connecting elastic wave velocities and electrical resistivity was
used for finding the shear wave velocities. It was found that the shear wave velocity of the soil
varies with changes in the dry density and water content of the soil.
REFERENCES
Jong, S.L. & Yoon, M.H. (2015). Theoretical relationship between elastic wave velocity and electrical
resistivity. Journal of Applied Geophysics, 116, 51–61.
Lawson, K.A., Danso, H.C., Odoi, C.A. & Quashie, F.K. (2011). Non-destructive evaluation of con-
crete using ultrasonic pulse velocity. Research Journal of Applied Sciences, Engineering and Technol-
ogy, 3(6), 499–504.
Nazli, Y., Inci, G. & Miller, C.J. (2000). Ultrasonic testing for compacted clayey soils. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 287, 5, 54–68.
Nazli, Y., James, L.H. & Mumtaz, A.U. (2003). Ultrasonic assessment of stabilized soils. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 301, 14, 170–181.
Zahid, K., Cascante, G. & Naggar, M.H. (2011). Measurement of dynamic properties of stiff specimens using ultrasonic waves. Canadian Geotechnical Journal, 48, 1–15.
ABSTRACT: Pile foundations are deep foundations constructed to transfer loads from the superstructure to the hard strata beneath. The common loads acting on piles are vertical, lateral and uplift loads. As the vertical load on a pile changes, the pile shows variations in its lateral behaviour. As the vertical load increases, the lateral capacity of the pile also tends to increase. The lateral behaviour of the pile under combined loading depends on various factors such as the order of loading and the properties of the soil and pile. Even though the vertical force is the most common force acting on a pile, uplift forces are also seen, especially in foundations of structures in a harbor and in the case of submerged platforms of waterfront structures. Hydrostatic pressure, overturning moments, lateral forces and swelling of the surrounding soil cause uplift of piles. In the case of sandy soils, uplift resistance depends on the skin friction of the piles and is determined by considering the shape of the failure surface, the shear strength of the soil and the weight of the pile. This paper presents a review of the research works conducted on pile foundations subject to combined (vertical and lateral) loading conditions and foundations subject to uplift loads.
1 INTRODUCTION
Pile foundations are slender members provided to transfer the load from the superstructure to the hard strata beneath. Lateral loads are also experienced in the pile foundations of some structures. High velocity winds, dynamic earthquake pressure, soil pressure and so on can cause lateral loads. Practically, when lateral loading is present on pile foundations, the actual condition
occurring involves combined vertical and lateral loading on the structures. Hence, the behavior
of piles under combined loading is an important matter to be considered during the structural
design of piles. Many theoretical methods exist to evaluate the behavior of piles under different
loads such as axial-compressive, axial-uplift and lateral loading. To minimize the complexity of analyzing different loads simultaneously, the loads are analyzed separately. Vertical loads on piles are analyzed to determine the bearing capacity, and lateral loads to determine the elastic behavior of piles. However, in the field, lateral loads are often of a large order, hence the effect of
combined loading cannot be neglected. Foundations of retaining walls, anchors for bulkheads,
bridge abutments, piers, anchorage for guyed structures and offshore structures, supported on
piles are exposed to large inclined uplift loads. Most of the studies on piles are concentrated
on vertical loads rather than uplift forces. Uplift forces may result either from a lateral force or
direct pull out, hydrostatic pressure, overturning moments, lateral force and swelling of the sur-
rounding soil. The effect of geometric properties like slenderness ratio on the lateral capacity
of piles under combined lateral and vertical loading has also been a topic of study. This paper
presents a review on the research works addressing both of these aspects.
Combined loading effects on piles have been a matter of study for many decades.
Trochanis et al. (1991) used 3D FE to study the effect of combined loading on piles and
Figure 1. Effect of vertical load on lateral capacity (Source: Karthigeyan et al., 2007).
Figure 2. The lateral behavior of piles under vertical load for various slenderness ratios (Source:
Karthigeyan et al., 2006).
Maru and Vanza (2017) conducted laboratory tests to investigate the lateral behavior of piles under axial-compressive loads and reported that the lateral load carrying capacity of piles improved with an increase in axial load and slenderness ratio. Hazzar et al. (2017) used 3D FD to study the
effect of axial loads on the lateral behavior of pile foundations in sand and concluded that the
behavior of piles in the lateral direction is not affected by the axial loads acting on the piles.
The study of the behavior of a single pile under uplift loads and the factors affecting the
uplift capacity of piles needs to be improved. Ayothiraman and Reddy (2015) investigated
pile behavior under combined uplift and lateral loading in sand. The results showed that the uplift load versus axial displacement behavior is nonlinear both for independent loading and in the case of combined loading. The behavior of a single pile, a square pile group and a hexagonal pile group under independent lateral and uplift loading, and also under combined uplift and lateral loading, was studied by Parekh and Thakare (2016). Based on their results, they concluded that, with an increase in the L/D ratio of the pile, the uplift load capacity of a single pile increases linearly up to a certain point for both independent uplift loading and combined loading with a constant lateral load. Figure 3 shows the variation of the uplift load capacity
loading with a constant lateral load. Figure 3 shows the variation of uplift load capacity
of single piles with an L/D ratio. To determine the ultimate uplift capacity of piles in sand,
Chattopadhyay and Pise (1986) proposed an analytical method with an assumed curved fail-
ure surface through the soil and determined the effects of factors like L/D ratio, the angle of
shearing resistance and pile friction angle on the ultimate uplift capacity of the pile. Das and
Seeley (1975) conducted some model tests in loose granular soil for determining the ultimate
uplift capacity of vertical piles under axial pull. The results include the variation of unit
uplift skin friction with the embedment depth. Rao and Venkatesh (1985) conducted laboratory studies in uniform sands to find the uplift behavior of short piles and reported that the uplift capacity of piles increases with the L/D ratio, the density of the soil, the particle size and the pile roughness.
4 CONCLUSION
• In non-cohesive soils, the lateral behavior of piles is influenced by the vertical load acting
on them which shows an increase in the lateral load carrying capacity.
• The effect of vertical load depends on the sequence of loading, geometric properties of the
pile and material properties of the soil.
• Lateral load carrying capacity shows an increasing trend with an increase in vertical load
as well as slenderness ratio.
REFERENCES
Achmus, M. & Thieken, K. (2010). On the behaviour of piles in non-cohesive soil under combined hori-
zontal and vertical loading. Acta Geotechnica, 5(3), 199–210.
Anagnostopoulos, C. & Georgiadis, M. (1993). Interaction of axial and lateral pile responses. Journal of
Geotechnical Engineering, 119(4), 793–798.
Ayothiraman, R. & Reddy, K.M. (2015). Experimental studies on behavior of single pile under com-
bined uplift and lateral loading. Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 141, 1–10.
Chattopadhyay, B.C. & Pise, P.J. (1986). Uplift capacity of piles in sand. Journal of Geotechnical Engi-
neering, ASCE, 112(9), 888–904.
Das, B.M. & Seeley, G.R. (1975). Uplift capacity of buried model piles in sand. Journal of the Geotechnical Engineering Division, ASCE, 10, 1091–1094.
Hazzar, L., Hussien, M.N. & Karray, M. (2017). Influence of vertical load on lateral response of pile
foundation in sands and clays. Journal of Rock Mechanics and Geotechnical Engineering, 9, 291–304.
Hussien, M.N., Tobita, T., Iai, S. & Karray, M. (2014a). Influence of pullout loads on the lateral
response of pile foundation. Proceedings of the 67th Canadian Geotechnical International Conference.
Hussien, M.N., Tobita, T., Iai, S. & Karray, M. (2014b). On the influence of vertical loads on the lateral
response of pile foundation. Computers and Geotechnics, 55, 392–403.
Hussien, M.N., Tobita, T., Iai, S. & Rollins, K.M. (2012). Vertical load effect on the lateral pile group
resistance in sand response. Geomechanics and Geoengineering, 7(4), 263–282.
Karthigeyan, S., Ramakrishna, V.V.G.S.T. & Rajagopal, K. (2006). Influence of vertical load on the
lateral response of piles in sand. Computers and Geotechnics, 33, 121–131.
Karthigeyan, S., Ramakrishna, V.V.G.S.T. & Rajagopal, K. (2007). Numerical investigation of the effect
of vertical load on the lateral response of piles. Journal of Geotechnical and Geoenvironmental Engi-
neering, 133(5), 512–521.
Maru, V. & Vanza, M.G. (2017). Lateral behaviour of pile under the effect of vertical load. Journal of
Information, Knowledge and Research in Civil Engineering, 4(2), 482–485.
Parekh, B. & Thakare, S.W. (2016). Performance of pile groups under combined uplift and lateral
loading. International Journal of Innovative Research in Science, Engineering and Technology, 5(6),
9219–9227.
Rao, K.S. & Venkatesh, K.H. (1985). Uplift behavior of short piles in uniform sand. Journal of Soils
and Foundations, 25(4), 1–7.
Trochanis, A.M., Bielak, J. & Christiano, P. (1991). Three-dimensional nonlinear study of piles. Journal
of Geotechnical Engineering ASCE, 117(3), 429–447.
G. Sreelekshmy Pillai
N.S.S. College of Engineering, Palakkad, Kerala, India
P. Vinod
Government Engineering College, Thrissur, Kerala, India
1 INTRODUCTION
Al-Khafaji (1993) conducted studies on fine grained soils from Iraq and the USA, to find
out the relationship between Atterberg limits and compaction characteristics at the standard
Proctor compactive effort. The following equations are recommended for Iraqi soils:
An empirical method was proposed by Blotz et al. (1998) for estimating γd-max and OMC of
fine grained soils at any rational compactive effort. They used the soil data from published
literature and their experimental study. The following equations were recommended for com-
paction characteristics at the standard Proctor compactive effort:
Gurtug and Sridharan (2004) brought out the effect of compaction energy on compaction
characteristics of fine grained soils. The analysis carried out was based on data from pub-
lished literature and experimental studies. The equations suggested were:
OMC = 0.92 wP (7)
γd-max = 0.98 γd-wP (8)
Here, γd-max was expressed in terms of γd-wP which is the Maximum Dry Unit Weight at Plas-
tic Limit Water Content, and is given by:
Studies were carried out by Sridharan and Nagaraj (2005) to determine which index prop-
erty correlated well with the compaction characteristics of fine grained soils at the standard
Proctor compactive effort. The correlation of the compaction characteristics with wP was
found to be much better than that with wL and the plasticity index. Using the data from their
study and from the literature, the following equations were recommended:
OMC = 0.92 wP (10)
γd-max = 0.23 (93.30 − wP) (11)
Based on the experimental studies on fine grained soils from different regions in Turkey,
and based on the published data, Sivrikaya (2008), also pointed out the importance of wP in
the prediction of compaction characteristics. The proposed equation for compaction charac-
teristics at the standard Proctor compactive effort being:
OMC = 0.94 wP (12)
γd-max = 21.97 − 0.27 OMC (13)
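As a worked illustration of how Equations 12 and 13 chain together (assuming the usual % and kN/m³ units), the short sketch below uses a purely hypothetical plastic limit value:

```python
def sivrikaya_standard_proctor(w_p):
    """OMC (%) and maximum dry unit weight (kN/m^3) from the plastic
    limit w_p (%), per Equations 12 and 13."""
    omc = 0.94 * w_p                  # Eq. 12
    gamma_d_max = 21.97 - 0.27 * omc  # Eq. 13
    return omc, gamma_d_max

print(sivrikaya_standard_proctor(20.0))  # w_p = 20% -> OMC = 18.8%, ~16.9 kN/m^3
```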
Dokovic et al. (2013) performed the standard Proctor tests on samples of fine grained
soils from Serbia. Multiple regression analysis was carried out to determine the relationship
between Atterberg limits and compaction characteristics, the empirical equation being:
By defining a fine grained soil by its wL and wP, the interrelationship between compaction
energy and compaction characteristics was brought within a definite framework by Sreelekshmy
Pillai and Vinod (2016), using data from literature. The recommended equations for compaction
characteristics of fine grained soils at the standard Proctor compactive effort were:
where γd-wL and γd-wP are the Maximum Dry Unit Weights at wL and wP water contents, respectively.
Through a review and analysis of reported literature on compaction characteristics of fine
grained soils, Vinod and Sreelekshmy Pillai (2017) showed that Toughness Limit (wT) (which
is a function of wL and wP) bears a good correlation with γd-max and OMC; the recommended
equations for compaction characteristics at the standard Proctor compactive effort being:
where,
and γd-wT represents the Maximum Dry Unit Weight at Toughness Limit Water Content.
Table 1 summarizes the empirical equations proposed by various researchers, source of
data used by them in the development of correlations and the number of data points used.
A critical review of the reported methods for prediction of compaction characteristics of
fine grained soils is presented in the following section.
In order to determine the accuracy and precision of the above correlations, the compaction characteristics of 493 fine grained soils reported in the literature, along with their index properties, were used. The data was obtained from the following sources: McRae (1958); Wang and Huang (1984); Daniel and Benson (1990); Al-Khafaji (1993); Daniel and Wu (1993); Benson and Trast (1995); Blotz et al. (1998); Gurtug and Sridharan (2004); Sridharan and Nagaraj (2005); Horpibulsuk et al. (2008); Sivrikaya (2008); Gunaydin (2009); Roy et al. (2009); Patel and Desai (2010); Datta and Chattopadhyay (2011); Beera and Ghosh (2013); Varghese et al. (2013); Shirur and Hiremath (2014); Talukdar (2014); and Nagaraj et al. (2015).
According to Cherubini and Giasi (2000), a logical assessment of the validity of any empiri-
cal correlation can be made by an evaluation technique which simultaneously takes into con-
sideration accuracy as well as precision. Accuracy can be estimated by the Mean Value (µ)
and precision by means of Standard Deviation (s). A global evaluation of the accuracy of a
correlation can then be made by two different indices: Ranking Distance (RD) (Cherubini & Orr, 2000) and Ranking Index (RI) (Briaud & Tucker, 1988), defined as follows:
RD = √{[1 − μ(predicted value/observed value)]² + [s(predicted value/observed value)]²}  (25)

RI = |μ[ln(predicted value/observed value)]| + s[ln(predicted value/observed value)]  (26)
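A minimal numerical sketch of these two indices, assuming the predicted and observed values are supplied as arrays (the function name is illustrative):

```python
import numpy as np

def ranking_indices(predicted, observed):
    """Ranking Distance (RD) and Ranking Index (RI), per Equations 25 and 26."""
    ratio = np.asarray(predicted, float) / np.asarray(observed, float)
    mu, s = ratio.mean(), ratio.std(ddof=1)              # accuracy, precision
    rd = np.hypot(1.0 - mu, s)                           # Eq. 25
    log_ratio = np.log(ratio)
    ri = abs(log_ratio.mean()) + log_ratio.std(ddof=1)   # Eq. 26
    return rd, ri
```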
For a good correlation, both of these indices tend to zero. The µ, s, RD and RI of the
predicted to observed values of γd-max and OMC were calculated and used to compare the
relative accuracy and precision of the above correlations (Equation 1 through Equation 8
and Equation 10 through Equation 23). A summary of the results are given in Tables 2 and 3.
It was seen that the correlations which used wT, and those that used both wL and wP as input parameters, provided greater accuracy and precision than those based on wL or wP alone. However, as far as the prediction of γd-max is concerned, all the reported equations, except those of Blotz et al. (1998) and Gunaydin (2009), were seen to yield satisfactory results. More accurate and precise empirical correlations for the prediction of the compaction characteristics of fine grained soils were then developed, taking into consideration all the data points hitherto reported in the literature (493 in total). The subsequent multiple linear regression
analyses resulted in the following equations:
OMC = 0.623 wT (27)
γd-max = 1.15 γd-wT (28)
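Similarly, Equations 27 and 28 apply directly once wT and γd-wT are known; the input values in the sketch below are hypothetical placeholders (both quantities are functions of wL and wP, as defined earlier):

```python
def proposed_standard_proctor(w_t, gamma_d_wt):
    """OMC (%) and maximum dry unit weight from the toughness limit w_t (%)
    and gamma_d-wT (kN/m^3 assumed), per Equations 27 and 28."""
    return 0.623 * w_t, 1.15 * gamma_d_wt

print(proposed_standard_proctor(30.0, 14.0))  # hypothetical inputs -> (18.69, 16.1)
```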
The RD and RI values of the predicted to observed values of OMC (as per Equation 27) were 0.19 and 0.18, respectively. These RD and RI values were better than those corresponding to all the equations reported in the literature. For γd-max, the RD and RI values of the predicted to observed values (as per Equation 28) were 0.09 and 0.09, respectively. These values are highly satisfactory.
4 CONCLUSIONS
A critical review and analysis of the published literature on compaction characteristics of fine
grained soils at the standard Proctor compactive effort has led to the following conclusions:
• The accuracy and precision of OMC prediction, at the standard Proctor compactive effort,
is better when wL along with wP is used.
• Among the various reported correlations, those proposed by Dokovic et al. (2013) and
Vinod and Sreelekshmy Pillai (2017) are the most accurate and precise. With the use of wT,
which effectively takes care of the combined effect of wL and wP, a new set of equations
for the prediction of compaction characteristics at the standard Proctor compactive effort
have also been proposed, and are as follows:
• OMC = 0.623 wT
• γd-max = 1.15 γd-wT
REFERENCES
Al-Khafaji, A.N. (1993). Estimation of soil compaction parameters by means of Atterberg limits. Quar-
terly Journal of Engineering Geology, 26, 359–368.
Benson, C.H. & Trast, J.M. (1995). Hydraulics conductivity of thirteen compacted clays. Clay Minerals,
4(6), 669–681.
Beera, A.K. & Ghosh, A. (2011). Regression model for prediction of optimum moisture content and maxi-
mum dry unit weight of fine grained soil. International Journal of Geotechnical Engineering, 5, 297–305.
Blotz, R.L., Benson, C.H. & Boutwell, G.P. (1998). Estimating optimum water content and maximum dry unit weight for compacted clays. Journal of Geotechnical and Geoenvironmental Engineering, 124(9), 907–912.
Briaud, J.L. & Tucker, L.M. (1988). Measured and predicted axial load response of 98 piles. Journal of Geotechnical Engineering, 114(9), 984–1001.
Cherubini, C. & Giasi, C.I. (2000). Correlation equations for normal consolidated clays. A Nakase &
T. Tschida (Ed(s))., International coastal geotechnical engineering in practice Vol. 1, pp. 15–20. Rot-
terdam, The Netherlands: A.A Balkema.
Cherubini, C. & Orr, T.L.L. (2000). A rational procedure for comparing measured and calculated values
in geotechnics. A. Nakase & T. Tschida (Ed(s).), International coastal geotechnical engineering in
practice, Vol. 1, pp. 261–265). Rotterdam, The Netherlands: A.A Balkema.
Daniel, D.E. & Benson, C.H. (1990). Water content—density criteria for compacted soil liners. Journal
Geotechnical Engineering, 116(12), 1181–1190.
Daniel, D.E. & Wu, Y.K. (1993). Compacted clay liners and covers for arid sites. The Journal of Geotech-
nical Engineering, 119(2), 223–237.
Datta, T. & Chattopadhyay, B.C. (2011). Correlation between CBR and index properties of soil, D.K.
Sahoo, T.G.S. Kumar, B.M. Abraham & B.T. Jose (Ed(s).), Proceedings of Indian Geotechnical Con-
ference, Kochi, India (pp. 131–133).
Dokovic, K., Rakic, D. & Ljubojev, M. (2013). Estimation of soil compaction parameters based on the
Atterberg limits. Mining and Metallurgy Engineering Bor, 4, 1–7.
Gunaydin, O. (2009). Estimation of soil compaction parameters by using statistical analysis and artifi-
cial neural networks. Environmental Geology, 57, 203–215.
Gurtug, Y. & Sridharan, A. (2004). Compaction behaviour and prediction of its characteristics of fine
grained soils with particular reference to compaction energy. Soils and Foundations, 44(5), 27–36.
Horpibulsuk, S., Katkan, W. & Apichatvullop, A. (2008). An approach for assessment of compaction curves of fine grained soils at various energies using one point test. Soils and Foundations, Japanese Geotechnical Society, 48(1), 115–126.
McRae, J.L. (1958). Index of compaction characteristics. Symposium on application of soil testing in
highway design and construction (Vol. 239, pp. 119–127). Philadelphia; ASTM STP.
Nagaraj, H.B., Reesha, B., Sravan, M.V. & Suresh, M.R. (2015). Correlation of compaction character-
istics of natural soils with modified plastic limit. Transportation Geotechnics, 2, 65–77.
Patel, R.S. & Desai, M.D. (2010). CBR predicted by index properties for alluvial soils of south Gujarat.
R. Beri (Ed.), Proceedings of Indian Geotechnical Conference, Mumbai, India (Vol. 1, pp. 79–82).
Roy, T.K., Chattopadhyay, B.C. & Roy, S.K. (2009). Prediction of CBR from compaction characteristics of cohesive soil. Highway Research Journal, 7–88.
Shirur, N.B. & Hiremath, S.G. (2014). Establishing relationship between CBR value and physical prop-
erties of soil. Journal of Mechanical and Civil Engineering, 11, 26–30.
Sivrikaya, O. (2008). Models of compacted fine-grained soils used as mineral liner for solid waste. Envi-
ronmental Geology, 53, 1585–1595.
Sreelekshmy Pillai, G.A. & Vinod, P. (2016). Re-examination of compaction parameters of fine grained
soils. Ground Improvement, 169(3), 157–166.
Sridharan, A. & Nagaraj, H.B. (2005). Plastic limit and compaction characteristics of fine grained soils.
Ground Improvement, 9(1), 17–22.
Talukdar, D.K. (2014). A study of correlation between California Bearing Ratio (CBR) value with other
properties of soil. International Journal of Emerging Technology and Advanced Engineering, 4, 559–562.
Varghese, V.K., Babu, S.S., Bijukumar, R., Cyrus, S. & Abraham, B.M. (2013). Artificial neural net-
works: A solution to the ambiguity in prediction of engineering properties of fine grained soils.
Geotechnical and Geological Engineering, 31, 1187–1205.
Vinod, P. & Sreelekshmy Pillai, G. (2017). Toughness limit: A useful index property for prediction
of compaction parameters of fine grained soils at any rational compactive effort. Indian Geotech
Journal. 47(1), 107–114.
Wang, M.C., ASCE, M. & Huang, C.C. (1984). Soil compaction and permeability prediction models.
Journal of Environmental Engineering, 110(6), 1063–1082.
V. Jaya
Department of Civil Engineering, Government Engineering College, Barton Hill, India
ABSTRACT: Stiffness of the base and sublayers is an important parameter in the design and
quality assurance of pavements. In existing pavements, prior to resurfacing, it is essential to
know the condition of the base that has been subject to traffic loading and environmental condi-
tions. When failure of a pavement occurs, a quick and accurate measurement of the properties
of the base layer is essential. The most popular methods are CBR, dynamic cone penetration
and resilient modulus tests. The bender element technique, which is applicable to a wide variety of soils, can be used in the field both on existing pavements and on pavements under construction. To accept a method for design purposes, it should be validated against conventional methods such as the penetration methods and the CBR method. This study aims at developing a correlation between the Dynamic Penetration Index (DPI) and CBR values and the shear modulus obtained from this method. A correlation of these values with the shear modulus can be of use to the road sector.
Keywords: shear modulus, bender element method, subgrade CBR, dynamic cone penetra-
tion testing
1 INTRODUCTION
The performance of pavements depends on the properties of the subgrade soil and pavement
materials. Accurate measurement of the properties of the base layer is essential to avoid
failure of the subgrade. Shear modulus is a soil characteristic, which determines the strength
of the subgrade and hence, its measurement is required for the design and construction of
pavements.
The shear modulus of subgrade soil can be determined in a laboratory using the theory of
wave propagation. Piezo ceramic sensors known as bender elements are currently used for the
determination of the shear modulus of soil.
A pair of in-line bender elements is usually used, where one acts as the transmitter sending
off the shear waves, while the other on the opposite end captures the arriving waves.
Dynamic Cone Penetrometer (DCP) results have never been used as an absolute indicator of the in situ strength or stiffness of a material in a pavement or subgrade. The California Bearing Ratio (CBR) value and the DPI interpretation are important soil parameters for the design of flexible pavements and airstrips. They can also be used for the determination of the subgrade reaction of soil by using correlations. It is essential to make this simple wave propagation technique familiar and to correlate it with CBR, so that the shear modulus can be used by practicing engineers for the design and performance assessment of pavements.
This paper focuses on the measurement of the shear modulus of soil using bender ele-
ments and the separate development of a correlation with CBR and DPI.
The incorporation of bender elements in a triaxial apparatus is arguably the most common
practice for the determination of shear modulus of soil samples in the laboratory, as demon-
Experiments were conducted in the laboratory to determine the shear modulus, DPI and
CBR value of selected subgrade soil. The properties of the materials, apparatus and method-
ology are explained in the following sections.
2.1 Materials
The soil samples used in this test were collected from selected subgrade locations in Thiruvanan-
thapuram, Kerala. The physical and engineering properties of the soil were determined accord-
ing to IS methods. The field density and specific gravity were 2.43 g/cc and 2.58, respectively.
Based on IS:1498-1970, the soil samples were found to be gravelly sand.
2.2 Methodology
The bender element test was carried out on unconfined specimens in a triaxial cell. A pair of in-line bender elements was fixed in the test apparatus, where one acts as the transmitter sending off the shear waves, while the other on the opposite end captures the arriving waves. A wave generator was used to transmit sine waves into the soil samples, and the received signals were recorded as waveforms on an oscilloscope. These recorded waveforms were used for the interpretation of the travel time through the sample. The shear wave velocity (vs) was derived by dividing the travel distance of the waves (between the transmitter and receiver) by the arrival time; this velocity, squared and multiplied by the specimen's bulk density, gives the shear modulus.
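This reduction is simple enough to sketch directly; the travel distance, time and density in the example below are illustrative only, not measured values from the study:

```python
def shear_modulus(travel_distance_m, travel_time_s, bulk_density_kg_m3):
    """Small-strain shear modulus: G = rho * Vs**2, with Vs = distance/time."""
    vs = travel_distance_m / travel_time_s       # shear wave velocity (m/s)
    return bulk_density_kg_m3 * vs ** 2          # shear modulus (Pa)

# e.g. 7.5 cm tip-to-tip distance, 0.5 ms arrival time, 1800 kg/m^3 density
print(shear_modulus(0.075, 0.5e-3, 1800.0))      # ~4.05e7 Pa, i.e. ~40.5 MPa
```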
A mathematical description of the relation between sets of variables is the best method of scientific explanation, since a purely graphical presentation always carries an element of bias or misleading presentation (Barua & Patgiri, 1996). The study of regression enables us to obtain a close functional relation between two or more variables (Kapur & Saxena, 1982).
Compaction properties were determined by a standard Proctor test as per IS:2720 (Part VII).
Unsoaked CBR values of the soil sample were determined as per the procedure laid down in
IS: 2720 (Part XVI) (1979). Dynamic cone penetration tests were conducted on the samples
according to IS:4968 (Part I) (1978).
From the results obtained, we can see that the variation of OMC with respect to CBR val-
ues was found to be inversely proportional. Variations of compaction properties with respect
to corresponding CBR values and shear modulus were determined.
From Figure 1, it may be noted that as the CBR value decreases, the optimum moisture content increases; the CBR value thus has a significant correlation with OMC. The fitted relation between CBR and OMC is y = 347.02x^(-1.243), with an R² of 0.9503.
From Figure 2, it may be noted that the shear modulus of the soil shows a similar trend of variation with optimum moisture content: as the water content increases, the shear stiffness of the soil reduces. The fitted relation between shear modulus and OMC is y = 2.2604x^(-0.65), with an R² of 0.6708.
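Both correlations have the power-law form y = a·x^b, which can be fitted by linear regression on the logarithms. A minimal sketch with hypothetical data points (not the measured values of this study):

```python
import numpy as np

# Hypothetical (OMC, CBR) pairs roughly following a power law; replace with
# measured data. Fit y = a * x**b via a straight line in log-log space.
omc = np.array([10.0, 12.0, 14.0, 16.0, 18.0])   # optimum moisture content (%)
cbr = np.array([19.6, 15.4, 12.8, 11.1, 9.6])    # unsoaked CBR (%)

b, log_a = np.polyfit(np.log(omc), np.log(cbr), 1)
a = np.exp(log_a)
pred = a * omc**b
r2 = 1.0 - np.sum((cbr - pred)**2) / np.sum((cbr - cbr.mean())**2)
print(f"CBR = {a:.2f} * OMC^({b:.3f}),  R^2 = {r2:.4f}")
```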
Figure 5. Final correlation graph between small strain shear modulus and penetration index.
Figure 7. Comparison between experimentally determined and predicted small strain shear modulus.
From the literature study, it was evident that we needed to integrate the engineering behavior
of the hitherto uncorrelated tests (DCP testing, CBR method and bender element test). To
extend the application of the bender element test in the field of design of pavement subgrade,
validation with a conventional testing technique such as the DCP testing and CBR testing
was necessary. The main conclusions, highlighting the novelty of the work, are briefly men-
tioned as follows:
• The CBR values decrease with an increase in OMC and a similar trend is also observed for
shear modulus.
• The decrease of shear modulus with an increase of water content can predict the soil
strength degradation due to the presence of moisture in pavements.
• A linear regression model was developed, which shows that the shear modulus of the soil samples increases as the DPI decreases.
• The shear modulus of soil predicted using the derived relationship was found to be 90% in
agreement with the experimental values when the DPI was considered as the parameter.
P.S. Sreedhu Potty, Mayuna Jeenu, Anjana Raj, R.S. Krishna, J.B. Ralphin Rose,
T.S. Amritha Varsha, J. Jayamohan & R. Deepthi Chandran
LBS Institute of Technology for Women, Thiruvananthapuram, India
ABSTRACT: The necessity of in-situ treatment of foundation soil to improve the bearing
capacity has increased considerably due to non-availability of good construction sites. Soil
confinement is one such method of soil improvement which can be economically adopted.
The improvement in bearing capacity and the reduction in settlement of footings resting on clay due to the addition of a laterally confined granular soil layer underneath are investigated by carrying out a series of finite element analyses using the FE software PLAXIS 2D. The influence of parameters such as the radius and depth of the laterally confined granular soil layer is studied.
It is observed that the load-settlement behaviour of isolated footings resting on clay can be
considerably improved by providing a laterally confined granular soil layer underneath it.
1 INTRODUCTION
The decreasing availability of good construction sites has led to increased use of sites with
marginal soil properties. The necessity for in situ treatment of foundation soil to improve its
bearing capacity has increased considerably. Soil confinement is a promising technique for improving bearing capacity. Though successfully applied in certain areas of soil engineering, this technique has not received much attention in foundation applications. In the last few decades, great improvements in foundation engineering have occurred, along with the development of new and unconventional types of foundation systems through considerations of soil-structure interaction. Much research has been carried out to investigate the improvement in bearing capacity due to confining the underlying soil; it has been shown that confinement reduces settlement and thereby increases bearing capacity.
Much research has been carried out on soil reinforced with geosynthetics (Mahmoud and Abdrabbo (1989), Khing et al. (1993), Puri et al. (1993), Das and Omar (1993), Dash et al. (2001a & b), Schimizu and Inui (1990), Mandal and Manjunath (1995), Rajagopal et al. (1999)). Several authors have also studied strip foundations reinforced with different materials (Verma and Char, 1986; Dawson and Lee, 1988).
Sawwaf and Nazer (2005) studied the behavior of circular footing resting on confined
sand. They used confining cylinders with different heights and diameters to confine the sand.
Krishna et al. (2014) carried out laboratory model tests on square footings resting on later-
ally confined sand. Vinod et al. (2007) studied the effect of inclination of loads on footings
resting on laterally confined soil.
In this research, the beneficial effects of providing a laterally confined granular layer underneath a footing resting on clay are investigated by carrying out nonlinear finite element analyses using the FE software PLAXIS 2D. The influence of the dimensions of the laterally confined granular soil layer on the load-settlement behaviour and on the stresses developed at the interface between the confining walls and the soil is studied.
Finite element analyses are carried out using the commercially available finite element software PLAXIS 2D. Different constitutive models are available in the software for simulating the behaviour of soil; in the present study the Mohr-Coulomb model is used. This non-linear model is based on basic soil parameters that can be obtained from direct shear tests: the internal friction angle and the cohesion intercept. Since a circular footing is analysed, an axisymmetric model is adopted. The settlement of the rigid footing is simulated using non-zero prescribed displacements.
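For reference, the Mohr-Coulomb criterion underlying this soil model limits the shear strength to τf = c + σn·tan(φ). A minimal sketch (the c and φ values below are illustrative, not the Table 1 inputs of this study):

```python
import math

def mohr_coulomb_strength(sigma_n_kpa, cohesion_kpa, phi_deg):
    """Shear strength tau_f = c + sigma_n * tan(phi) on the failure plane."""
    return cohesion_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

# Illustrative granular fill: c = 1 kPa, phi = 34 deg, 50 kPa normal stress.
print(f"{mohr_coulomb_strength(50.0, 1.0, 34.0):.1f} kPa")  # ~34.7 kPa
```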
The displacement of the bottom boundary is restricted in all directions, while at the vertical sides displacement is restricted only in the horizontal direction. The initial geostatic stress states for the analyses are set according to the unit weight of the soil. The soil is modelled using 15-node triangular elements.
Mesh generation is performed automatically, and a medium mesh size is adopted in all the simulations. The size of the footing (B) is taken as one metre, and the width and depth of the soil mass are taken as 10B in all analyses.
The footing and the confining walls are modelled using plate elements. To simulate the interaction between the confining walls and the surrounding soil, an interface element is provided on both the outer and inner surfaces. The geometric model is shown in Fig. 1 and the typical deformed shape in Fig. 2. Properties of locally available sand and clay are adopted in the analyses; the material properties are outlined in Table 1.
The influence of dimensions of laterally confined soil on the load-settlement behaviour and
interaction between confining walls and surrounding soil are particularly studied. Various geo-
metric parameters studied are indicated in Fig. 3. The diameter of footing is B, radius and
thickness of confined soil are r and h respectively. The diameter of footing is taken as one metre
for all the cases. Various values of r and h adopted in the analyses are indicated in Table 2.
Figure 4. Vertical stress vs settlement curves for various values of (h/B) when (r/B) = 1.
Figure 5. Vertical stress vs settlement curves for various values of (r/B) when (h/B) = 1.5.
Figure 7. Distribution of shear stress at the interface between confining walls and sand for various values of (r/B).
It is seen that the normal stress increases with the radius of the confining area up to (r/B) = 1.5 and thereafter reduces. When (r/B) = 1, the peak stress occurs at a height of B from the base. For higher values of (r/B), the point of peak stress shifts upwards and occurs at a height of 3B from the base.
Figure 7 presents the distribution of shear stress along the interface between confining
walls and sand. It is seen that the distribution of shear stress drastically changes with the
increase in radius of confining area. The pattern of soil movement within the confining area
changes with the increase in radius which influences the shear stress at interface.
It is observed that the distribution of normal and shear stresses at larger cell widths is very different from that at smaller cell widths. This validates the observation of Vinod et al. (2007) that at smaller cell widths the confining cell, soil and footing behave as a single unit (a deep foundation), and that this behaviour changes as the radius of confinement is increased.
The following conclusions are drawn from the results of the finite element analyses:
1. The load-settlement behaviour considerably improves due to lateral confinement of the underlying soil.
2. The distribution of normal and shear stresses at the interface between the confining walls and the confined soil is considerably influenced by the radius of confinement.
REFERENCES
Binquet, J. & Lee, K.L. (1975). Bearing capacity tests on reinforced earth slabs. Journal of the Geotechnical Engineering Division, 101(12), 1241–1255.
Das, B.M. & Omar, M.T. (1993). The effects of foundation width on model tests for the bearing capacity of sand with geogrid reinforcement. Geotechnical and Geological Engineering, 12, 133–141.
Dash, S., Krishnaswamy, N. & Rajagopal, K. (2001a). Bearing capacity of strip footing supported on geocell-reinforced sand. Geotextiles and Geomembranes, 19, 235–256.
Dash, S., Rajagopal, K. & Krishnaswamy, N. (2001b). Strip footing on geocell reinforced sand beds with additional planar reinforcement. Geotextiles and Geomembranes, 19, 529–538.
Dawson, A. & Lee, R. (1988). Full scale foundation trials on grid reinforced clay. Geosynthetics for Soil Improvement, 127–147.
Khing, K.H., Das, B.M., Puri, V.K., Cook, E.E. & Yen, S.C. (1993). The bearing capacity of a strip foundation on geogrid-reinforced sand. Geotextiles and Geomembranes, 12, 351–361.
Krishna, A., Viswanath, B. & Nikitha, K. (2014). Performance of square footing resting on laterally confined sand. International Journal of Research in Engineering and Technology, 3(6).
Mahmoud, M.A. & Abdrabbo, F.M. (1989). Bearing capacity tests on strip footing on reinforced sand subgrade. Canadian Geotechnical Journal, 26, 154–159.
Mandal, J.M. & Manjunath, V.R. (1995). Bearing capacity of strip footing resting on reinforced sand subgrades. Construction and Building Materials, 9(1), 35–38.
Puri, V.K., Khing, K.H., Das, B.M., Cook, E.E. & Yen, S.C. (1993). The bearing capacity of a strip foundation on geogrid reinforced sand. Geotextiles and Geomembranes, 12, 351–361.
Rajagopal, K., Krishnaswamy, N. & Latha, G. (1999). Behavior of sand confined with single and multiple geocells. Geotextiles and Geomembranes, 17, 171–184.
Sawwaf, M.E. & Nazer, A. (2005). Behavior of circular footing resting on confined granular soil. Journal of Geotechnical and Geoenvironmental Engineering, 131(3), 359–366.
Vinod, K.S., Arun, P. & Agrawal, R.K. (2007). Effect of soil confinement on ultimate bearing capacity of square footing under eccentric-inclined load. Electronic Journal of Geotechnical Engineering, 12, Bund. E.
H.S. Athira, V.S. Athira, Fathima Farhana, G.S. Gayathri, S. Reshma Babu,
N.P. Asha & P. Nair Radhika
Department of Civil Engineering, LBS Institute of Technology for Women, Trivandrum, India
ABSTRACT: Sheet pile walls are a common form of earth retaining structures. Earth pres-
sures developed on either side of the sheet pile wall ensure its moment and force equilibrium.
When the height of the earth that needs to be retained is rather high, the sheet pile walls are usu-
ally anchored near the top. On the other hand, when the height is small, cantilever sheet pile walls are employed. Contrary to the conventional methods, this study takes into account the stiffness and structural capacity of the sheet pile walls. The aim of the study is to analyse the deformation behaviour of cantilever and anchored bulkhead sheet pile walls for different depth-of-embedment to height ratios by finite element analysis using the PLAXIS 2D software. It is observed that wall deformation decreases when wall penetration depths are increased in cohesionless soils.
1 INTRODUCTION
Retaining walls are used to hold back soil and maintain a difference in the elevation of the
ground surface. The retaining wall can be classified according to system rigidity into either
rigid or flexible walls. A wall is considered rigid if it moves as a rigid body and does not experience bending deformations. Flexible walls are retaining walls that undergo bending deformations in addition to rigid body motion. The steel sheet pile wall is the most common example of a flexible wall because it can tolerate relatively large deformations.
Sheet pile walls are one of the oldest earth retention systems utilized in civil engineer-
ing projects. They consist of continuously interlocked pile segments embedded in soils to
resist horizontal pressures. Sheet pile walls are used for various purposes, such as large waterfront structures, cofferdams, cut-off walls under dams, erosion protection, stabilizing ground slopes, excavation support systems, and floodwalls. Sheet pile walls can be either
cantilever or anchored. The selection of the wall type is based on the function of the wall, the
characteristics of the foundation soils, and the proximity of the wall to existing structures.
While cantilever walls are usually used for wall heights of less than 6 m, anchored walls are generally adopted for greater heights.
3 RESULTS
A. Cantilever wall
The analysis results, in terms of maximum horizontal displacements and shear force with increasing wall penetration depth, for the 6 m cantilever sheet pile wall in medium dense sand are given in the following figures. The wall penetration depth D was normalized by the wall height H for all cases.
Wall displacements: The figures show the effect of increasing wall penetration depth on the maximum horizontal wall displacements. The analysis results indicate that a significant decrease in the wall displacements is obtained with an increase in the wall penetration depth. These reductions are relative to the deformations of a wall designed using the conventional design methods. The results show that increasing the wall penetration depth to height ratio to 0.6 in medium dense sand reduces the horizontal wall displacement to about 40% of the wall displacements observed when the ratio was 0.3. There is a considerable reduction in the wall displacements when the ratio is increased to 0.7.
Graphs of total horizontal displacement were plotted for the different D/H ratios.
Figure 7. Total horizontal displacement for different D/H ratios for anchored wall.
The effect of increasing the wall penetration depth on the wall displacements for all cases studied is minimal, because the anchored wall is restrained both at the bottom of the wall and at the anchor position. Although the wall can bend between these positions, the overall wall displacements remain quite small due to these fixities, regardless of the penetration depth.
4 CONCLUSIONS
A. Cantilever walls
Analysis of a 6 m cantilever sheet pile wall in medium dense sand was done using Plaxis 2D.
As seen in the figures, increasing the wall penetration depth decreases the wall displacements. The maximum displacement was obtained when the D/H ratio was 0.3, indicating that a sheet pile wall with a D/H ratio of 0.3 has a greater probability of failure. The minimum displacement was obtained for a D/H ratio of 0.7; however, providing a D/H ratio of 0.7 is not economical. A D/H ratio of 0.6 reduces the wall displacement by about 40%; though this is not as large a reduction as that obtained with a D/H ratio of 0.7, it is still significant. Hence, providing a D/H ratio of 0.6 is ideal.
B. Anchored walls
In the analysis of a 9 m anchored sheet pile wall in medium dense sand using PLAXIS 2D, it was found that an increase in wall penetration depth does reduce the wall displacement, but the reduction is small. The displacement in each case is itself small, regardless of the D/H ratio, due to the fixity provided by the anchor. Therefore, providing a lower D/H ratio, such as 0.45, would suffice.
V. Gokul
Department of Aerospace Engineering, Madras Institute of Technology, Chennai, India
ABSTRACT: Biologically inspired designs provide a different set of strategies and tools for dealing with engineering problems. The humpback whale is one such inspiring species, being the most acrobatic of the baleen whales and capable of performing impressive maneuvers.
of large rounded protuberances or tubercles along the Leading Edge (LE) of the humpback
whale flipper highlights its uniqueness. The LE tubercles act as passive flow control devices that
improve the performance and maneuverability of the flipper. The aerodynamic performance
of NACA airfoils such as NACA 0015 and NACA 4415, and modified airfoils with leading-
edge tubercles (BUMP 0015, 4415) are numerically investigated at a Reynolds number (Re) of
1.83 × 10^5. The popular commercial Computational Fluid Dynamics (CFD) tool FLUENT
was used. The post-stall and pre-stall characteristics are analyzed in terms of coefficients of lift
(CL) and drag (CD) with respect to various Angles of Attack (AoAs). Both airfoils, with and
without tubercles, are investigated. Comparisons of streamline distribution, pressure coeffi-
cient (CP), CL and CD between the baseline airfoils and the airfoils with tubercles help to explain
the momentum transfer characteristics of tubercles and hence how a stall is delayed.
Keywords: Bio inspired, Humpback Whale Flipper, Tubercles, Flow separation, Stall
1 INTRODUCTION
Tubercles are rounded protuberances of the Leading Edge (LE) that alter the flow field character-
istics around an airfoil. It has been suggested that tubercles on the flipper of the humpback whale
function as lift-enhancement devices, allowing the flow to remain attached for a larger Angle of
Attack (AoA), and thus delaying stall. The protuberances found along the LE of the humpback
whale flipper vary in amplitude and wavelength across the span. Further, the amplitude of the
protuberances ranges from 2.5% to 12% of the chord and the wavelength varies from 10% to 50%
of the chord. From a more morphological point of view, Fish and Battle (1995) proved that the
geometrical properties of the tubercles could influence the aerodynamic performance of a wing.
The wind tunnel experiments of Miklosovic et al. (2004) demonstrated drastic enhancements in lift for post-stall AoAs, and also showed a delay in the stall angle of up to 40%. These experiments were performed at Reynolds numbers (Re) of the order of 10^5. One of the
mechanisms of performance enhancement is believed to be the generation of stream-wise
vortices, which improve the momentum exchange in the boundary layer. The potential ben-
efits of tubercles on the aerodynamic performance of an airfoil were addressed by Bushnell
and Moore (1991). Over the last two decades, several studies have been performed experimentally and numerically to assess the influence of tubercles.
2 GEOMETRY
NACA 0015 and NACA 4415 profiles are used as the wing cross section. The leading-edge tubercle design is formulated through an unequal chord length along the span. The ratio of amplitude (A) to wavelength (λ), expressed as η = A/λ, retains high priority in the design perspective of tubercle research. The local chord length C(z) at the span-wise ordinate z is described by the cosine wave equation:

C(z) = A cos(2πz/λ) + C

where C is the mean chord length.
The chord length variation (∆c) along the span-wise direction is computed and is presented
in Figure 1.
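A minimal sketch of laying out this span-wise chord distribution (the amplitude and wavelength here are illustrative values within the 2.5–12% and 10–50% chord ranges quoted above, not the exact BUMP geometry):

```python
import numpy as np

C_mean = 0.1                       # mean chord (m), illustrative
A = 0.05 * C_mean                  # tubercle amplitude: 5% of chord
lam = 0.25 * C_mean                # tubercle wavelength: 25% of chord
z = np.linspace(0.0, 4 * lam, 9)   # span-wise stations over four wavelengths

C = A * np.cos(2 * np.pi * z / lam) + C_mean   # local chord C(z)
delta_c = C - C_mean                           # chord variation along the span
for zi, dci in zip(z, delta_c):
    print(f"z = {zi:.4f} m, delta_c = {dci:+.5f} m")
```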
3 MESHING
A computational domain is created around the airfoil, extending 10C upstream and 10C downstream. The top and bottom boundaries of the domain are located 10C away from the foil. The whole computational domain is discretized with an unstructured grid of triangular cells, as shown in Figures 4a and 4b.
$$\rho\left(\frac{\partial U_i}{\partial t} + U_k \frac{\partial U_i}{\partial x_k}\right) = -\frac{\partial P}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\mu \frac{\partial U_i}{\partial x_j}\right) + \frac{\partial R_{ij}}{\partial x_j} \qquad (1)$$
where R_ij denotes the Reynolds stresses, and U, P and ρ represent the velocity, pressure and density, respectively. The y+ values for all the simulations are maintained below 3. The boundary conditions for the numerical analysis are pressure far-field and wall.
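As a pre-meshing check, the first cell height needed for a target y+ can be estimated from a flat-plate skin-friction correlation. A rough sketch (the freestream conditions are assumed so that Re ≈ 1.83 × 10^5; this estimate is not taken from the paper):

```python
import math

rho, mu = 1.225, 1.7894e-5        # air density (kg/m^3) and viscosity (Pa.s)
U, c = 26.7, 0.1                  # assumed freestream speed (m/s) and chord (m)
Re = rho * U * c / mu             # ~1.83e5

cf = 0.058 * Re**-0.2             # flat-plate skin-friction estimate
tau_w = 0.5 * cf * rho * U**2     # wall shear stress (Pa)
u_tau = math.sqrt(tau_w / rho)    # friction velocity (m/s)
dy = 3.0 * mu / (rho * u_tau)     # first cell height for y+ = 3
print(f"Re = {Re:.3g}, first cell height ~ {dy * 1e6:.0f} micrometres")
```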
To examine the influence of tubercles, the viscous fluid flow over a baseline airfoil is first
simulated. Then the airfoils with tubercles are simulated with the same Reynolds number.
The differing values in these simulations show the influence of the tubercles. The numerical
experiments are carried out at angles of attack of 0° to 21°. Favorable effects of tubercles are
reported on aerodynamic coefficients (CL and CD) and stalling angle.
Figure 4. (a) Mesh generated for cambered airfoil. (b) Mesh generated for symmetrical airfoil.
The streamline patterns are visualized along the mid-span plane. The suction surface of the baseline airfoil is in a separated condition when the AoA is increased, as highlighted in Figures 5a, 5b and 5c, and Figures 9a, 9b and 9c. This applies to both the NACA 0015 and NACA 4415 airfoils.
The streamlines of the peak and trough regions are captured at x/b values of 0.5 and 0.25,
respectively. For the BUMP 0015 airfoil at 0° angle of attack, the separation effect is not sig-
nificant. For BUMP 0015 at an AoA of 15°, in the peak region the attached flow covers the
suction surface, forming counter-rotating vortex structures, and the flow at the trough region
is detached for the same AoA, as shown in Figures 8a and 8b. By contrast, for the BUMP
4415 airfoil at an AoA of 15°, the peak region has attached flow but the trough region has a
more separated flow than the BUMP 0015 airfoil, as highlighted in Figure 12b.
The streamline patterns over peak and trough regions at AoAs of 0º and 10º indicate that
the flow is close to the surface, as shown in Figures 6, 7, 10 and 11, with meager amounts of
separation.
Figure 7. Streamlines of BUMP 0015 airfoil at α = 10°.
Figure 8. Streamlines of BUMP 0015 airfoil at α = 15°.
The airfoils investigated stall at angles of attack ranging from 11° to 15°. The cambered NACA 4415 airfoil stalls at 12° and the BUMP 4415 airfoil stalls at 15°. The drag coefficient decreased for the BUMP 4415 airfoil in the post-stall condition.
5 CONCLUSION
The effect of leading-edge tubercles on the pre-stall and post-stall behavior of an airfoil is studied. The formation of counter-rotating pairs of stream-wise vortices is held to enhance the airfoil characteristics in comparison with the baseline airfoils. During pre-stall, the presence of a laminar separation bubble increases the lift of the airfoil with minimal drag penalty. The implication is that tubercles on airfoils and wings help to increase the lift beyond that of the baseline airfoils or wings. The CL is improved by 20% to 30% for the modified airfoils, as presented in Figure 13. The bursting of the laminar separation bubble at the leading edge of the baseline airfoil during the stall increases the drag with a significant loss in lift. In an airfoil employing tubercles, the separation bubbles are restricted to the trough regions between the tubercles. Because the row of tubercles redirects the flow of air into the scalloped valley between each tubercle, it causes swirling vortices that roll up and over the airfoil, which enhances the lift properties. Therefore, the tubercled airfoil does not stall as quickly. At post-stall angles of attack, wings experience a reduction in lift due to flow separation. Greater post-stall lift can be attained with the aid of tubercles of larger amplitude, since the strength of the stream-wise vortices increases with the amplitude of the tubercles. These vortices transfer momentum into the boundary layer, and the circulation of the flow in the downstream direction increases. As a result, the flow tends to stay attached to the upper surface of the wing, yielding a large increase in lift.
REFERENCES
Bushnell, D.M. & Moore, K.J. (1991). Drag reduction in nature. Annual Review of Fluid Mechanics, 23,
65–79.
Cai, C., Zuo, Z., Liu, S. & Wu, Y. (2015). Numerical investigations of hydrodynamic performance of
hydrofoils with leading-edge protuberances. Advances in Mechanical Engineering, 7(7), 1–11.
Corsini, A., Delibra, G. & Sheard, A.G. (2013). On the role of leading-edge bumps in the control of stall
onset in axial fan blades. Journal of Fluids Engineering, 135, 081104.
De Paula, A.A., Padilha, B.R.M., Mattos, B.D. & Meneghini, J.R. (2016). The airfoil thickness effect
on wavy leading edge performance. Paper presented at the 54th AIAA Aerospace Sciences Meeting, AIAA SciTech, San Diego, CA. doi:10.2514/6.2016-1306.
Edel, R.K. & Winn, H.E. (1978). Observations on underwater locomotion and flipper movement of the
humpback whale Megaptera novaeangliae. Marine Biology, 48, 279–287.
Fish, F.E. & Battle, J.M. (1995). Hydrodynamic design of the humpback whale flipper. Journal of Mor-
phology, 225, 51–60.
Hansen, K.L., Kelso, R.M. & Dally, B.B. (2011). Performance variations of leading-edge tubercles for
distinct airfoil profiles. AIAA Journal, 49(1), 185–194.
Johari, H., Henoch, C.W., Custodio, D. & Levshin, A. (2007). Effects of leading-edge protuberances on
airfoil performance. AIAA Journal, 45(11), 2634–2642.
Karthikeyan, N., Sudhakar, S. & Suriyanarayanan, P. (2014). Experimental studies on the effect of lead-
ing edge tubercles on laminar separation bubble. Paper presented at the 52nd Aerospace Sciences Meeting, AIAA SciTech 2014, 13 Jan 2014, National Harbor, MD. doi:10.2514/6.2014-1279.
Lohry, M.W., Clifton, D. & Martinelli, L. (2012). Characterization and design of tubercle leading-edge
wings. Paper presented at the Seventh International Conference on Computational Fluid Dynamics
(ICCFD7–4302), Big Island, Hawaii.
Miklosovic, D.S., Murray, M.M., Howle, L.E. & Fish, F.E. (2004). Leading edge tubercles delay stall on
humpback whale flippers. Physics of Fluids, 16(5), L39–L42.
Rostamzadeh, N., Hansen, K.L., Kelso, R.M. & Dally, B.B. (2014). The formation mechanism and
impact of stream wise vortices on NACA 0021 airfoil’s performance with undulating leading edge
modification. Physics of Fluids, 26, 107101.
Skillen, A., Revell, A., Pinelli, A., Piomelli, U. & Favier, J. (2015). Flow over a wing with leading edge
undulations. AIAA Journal, 53(2), 464–472.
Watts, P. & Fish, F.E. (2001). The influence of passive leading edge tubercles on wing performance.
In Proceedings 12th UUST, Durham, NH, August 2001. Lee, NH: Autonomous Undersea Systems
Institute.
Zhang, M.M., Wang, G.F. & Xu, J.Z. (2013). Aerodynamic control of low-Reynolds number airfoil with
leading-edge protuberance. AIAA Journal, 51(8), 1960–1971.
ABSTRACT: The present investigation focuses on structure and shape modification for a
High Altitude Long Endurance (HALE) Unmanned Aerial Vehicle (UAV). Aerodynamics and
performance parameters are defined during the conceptual design. Accordingly, the fuselage shape is designed using a NACA 2410 (low Reynolds number) profile, and the wing modification is carried out using Gurney flaps and winglets. The objectives of this investigation follow from the design, and the wind tunnel model is briefly described. This project is intended to obtain coherent results between theoretical calculations and both the experimental and Computational Fluid Dynamics (CFD) results. Two methods for calculating the pressure coefficient are used: experimental low-speed subsonic wind tunnel testing, and a numerical solution using ANSYS R15.0. The Spalart-Allmaras turbulence model was used for solution initialization. The lift and drag coefficients of the airfoil at different angles of attack were observed in both the computational and experimental studies, and the pressure and velocity distributions were obtained using ANSYS R15.0. The aerodynamic characteristics of the UAV model have been determined at different angles of attack, and aerodynamic parameters such as the lift coefficient, drag coefficient and lift-to-drag (L/D) ratio are improved by optimizing the design.
1 INTRODUCTION
An Unmanned Aerial Vehicle (UAV) is a flying machine without a human pilot on board; UAVs operate, as a rule, with varying degrees of autonomy. The development of advanced UAVs for civil and military applications has driven the development of modern aviation. In this study, a structure and shape modification procedure for a High Altitude Long Endurance (HALE) UAV is presented. The development and use of advanced remote-sensing UAVs such as HALE platforms, which could better address the needs of the emergency management community, requires extensive consideration and assessment. The state of the art in HALE platform technology is exemplified by the US Air Force's Global Hawk unmanned endurance air vehicle, which is in system trials and is intended for fully autonomous operation (Goraj et al., 2004) with a maximum flight endurance of 40 hours. Global Hawk can fly up to 3,000 nautical miles (5,556 km) at up to 67,000 ft (20,422 m), loiter over a target area for 24 hours while using sensors or other hardware, and afterwards return to base, all without human guidance.
The Gurney flap (Zerihan & Zhang, 2001) is a simple device consisting of a short strip fitted perpendicular to the pressure surface along the trailing edge of a wing. With a typical size of 1–15% of the wing chord (Bechert et al., 2000), it can exert a significant effect on the lift (downforce), with a small change in the stalling incidence, leading to a higher CL max, as documented by Liebeck (1978). Although the device was named after Gurney in the 1960s, mechanically similar devices were employed earlier, for example, by Gruschwitz, Schrenk and Duddy. Designing an aircraft is considerably easier when information about existing aircraft of a similar type is available; this gives more confidence and avoids confusion when choosing design parameters (Parezanovic et al., 2005).
2 METHODOLOGY
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$$
The lift and drag coefficients of the airfoil at different angles of attack were observed for
both computational and experimental study. Data is taken by varying the angle of attack and
by keeping the free stream velocity constant for both tests.
to visualize its effect at various levels. The drag variation is represented in the drag polar, in which CL is plotted against CD. In this plot, the lift coefficient increases up to a particular angle of attack, after which it drops due to the subsequent increase in the drag coefficient. The graph of pressure coefficient against the location of the pressure tappings is plotted for both the experimental and computational results. The obtained results are compared and validated. It is found that the optimized lift coefficient and the stall region of the designed HALE UAV are similar in both the experimental and CFD results.
The primary goal of this paper is to investigate the aerodynamic characteristics of a HALE UAV in order to produce a UAV with optimized aerodynamic performance. To that end, it explains how aerodynamic parameters such as the coefficient of lift (CL) and coefficient of drag (CD) are improved; the coefficient of pressure (CP) for the model is calculated from the experimental results. The desired result is achieved by modifying the shape of the fuselage. The presence of a Gurney flap on the wing served to increase the lift generated on the body, while the winglets smooth the flow by suppressing the formation of tip vortices. The induced drag has been counteracted by using the winglets, and the vortex drag has been counteracted by the Gurney flaps. For the three-dimensional analysis, favorable results were obtained with the ANSYS FLUENT software: the CL vs. α and CD vs. α graphs give satisfactory values of CL and CD. Data are obtained by keeping the free stream velocity constant and varying the angle of attack in both the experimental and computational studies. Since HALE UAVs fly at high altitude, in low density conditions and at lower speeds, it is difficult to locate the transition point; this problem is overcome by using a low Reynolds number airfoil-shaped fuselage structure. Manometric readings are obtained using a subsonic wind tunnel at a constant velocity V = 25 m/s with the aid of pressure tappings, and from these readings the free stream static and total pressures were calculated. Computational investigations have been performed to examine the aerodynamic parameters CL, CD and CP for the HALE UAV. The stall angle for the designed HALE UAV is 28°, and the coefficient of lift is found to be maximum at an angle of attack of α = 16°. The experimental results are then compared with the computational simulations in relation to the aerodynamic coefficients. The pressure at the stall angle of attack is found to be maximum on the pressure side and minimum on the suction side; the velocity contour and pressure distribution exhibit the same result. Finally, the results show good agreement in CP between the experimental and computational studies. This project achieved coherent results between theoretical calculations and both the experimental and CFD results. The analysis yields better results for a HALE UAV and can be implemented in future UAV projects to obtain an optimized coefficient of lift (CL). Using this design optimization, HALE UAVs will be able to achieve higher ranges and longer endurance.
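The pressure-tap reduction described above amounts to Cp = (p − p∞)/(½ρV²). A minimal sketch with hypothetical gauge readings (only V = 25 m/s is taken from the text; the tap pressures and air density are assumptions):

```python
rho = 1.225            # air density (kg/m^3), assumed sea-level value
V = 25.0               # freestream velocity (m/s), as in the wind tunnel test
q = 0.5 * rho * V**2   # dynamic pressure (Pa)

p_inf = 0.0                              # freestream static pressure (gauge)
taps_pa = [-310.0, -180.0, -40.0, 55.0]  # hypothetical tap pressures (gauge, Pa)
cp = [(p - p_inf) / q for p in taps_pa]  # pressure coefficient at each tap
print([round(c, 3) for c in cp])
```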
REFERENCES
Bechert, D.W., Meyer, R. & Hage, W. (2000). Drag reduction of airfoils with mini flaps. AIAA, Berlin, Germany.
Goraj, Z., Frydrychewicz, A., Świtkiewicz, R., Hernik, B., Gadomski, J., Goetzendorf-Grabowski, T., Figat, M. & Chajec, W. (2004). High altitude long endurance unmanned aerial vehicle of a new generation: A design challenge for a low cost, reliable and high performance aircraft. Bulletin of the Polish Academy of Sciences: Technical Sciences, 52(3), 173–194.
Hendrick, P., Verstraete, D. & Coatanea, M. (2008). Preliminary design of a joined wing HALE UAV. 26th International Congress of the Aeronautical Sciences (ICAS 2008).
Jahangir Alam, G.M., Mamun, Md., Abu, T.A., Quamrul Islam, Md. & Sadrul Islam, A.K.M. (2013). Investigation of the aerodynamic characteristics of an airfoil shaped fuselage UAV model. International Conference on Mechanical Engineering.
Liebeck, R.H. (1978). Design of subsonic airfoils for high lift. Journal of Aircraft, 15(9), 547–561.
Parezanovic, V., Rasuo, B. & Adzic, M. (2005). Design of airfoils for wind turbine blades. ResearchGate. Retrieved from https://www.researchgate.net/publication/228608628_DESIGN_OF_AIRFOILS_FOR_WIND_TURBINE_BLADES.
Zerihan, J. & Zhang, X. (2001). Aerodynamics of Gurney flaps on a wing in ground effect. AIAA Journal, 39(5), 772–780.
1 INTRODUCTION
Traffic signboards provide important information, directions and warnings on the road; they
are designed and placed as assistance to drivers. They keep traffic flowing freely by helping
drivers reach their destinations and letting them know entry, exit, and turn points in advance.
Pre-informed drivers will naturally avoid committing mistakes or taking abrupt turns and
causing bottlenecks. Comprehension of traffic signboards is crucial to safety, but they are not
always detected or recognized correctly. Signboards present issues in terms of detection and
recognition due to poor visibility, bad weather conditions, the color combinations used, their
height and position, vehicle speed, and driver’s age and vision.
Usability indicates ease or convenience of use. In this study, we consider signboard usabil-
ity issues. The objective of this paper is to assess the usability problems of signboards faced
by drivers and to study the effectiveness of signboards in Kottayam.
2 LITERATURE REVIEW
Numerous studies have indicated that regulatory and warning signs help to improve the flow
of traffic, reduce accidents and ensure that pedestrians can safely use designated crosswalks.
Traffic sign usability is influenced by numerous factors.
3 METHODOLOGY
The significant factors that affect usability have been determined from the literature survey. A
preliminary observational field study was carried out on three 15-kilometer stretches of road
in the Kottayam district of Kerala state in India. The field study provided an opportunity
to comprehend the signboard types and the factors highlighted in the literature that affect
their usability. Figures 1 and 2 show some of the signboards on the roads observed. Subse-
quently, a brainstorming session was undertaken with four student project members and two
experts to focus on the road signboards’ attributes and their usability. Figure 3 shows a cause
and effect diagram for the poor usability of signboards according to these brainstorming
sessions. Next, a four-part questionnaire was prepared and a pilot survey was conducted in
Kottayam district. Convenience sampling was used for the survey. Where possible, respond-
ents were approached individually using hard copy; others were surveyed using an online
method. A total of 236 responses were obtained. These were filtered for missing values and,
subsequently, descriptive statistics were used for data analysis.
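A minimal sketch of this filtering and descriptive analysis step, assuming the responses are tabulated with hypothetical column names (not the actual survey instrument):

```python
import pandas as pd

# Hypothetical subset of survey responses; column names are illustrative.
df = pd.DataFrame({
    "age": [23, 25, None, 31, 22],
    "uses_signboards": ["no", "sometimes", "no", "yes", "sometimes"],
})

df = df.dropna()                     # filter out records with missing values
print(df["age"].describe())          # descriptive statistics for driver age
print(df["uses_signboards"].value_counts(normalize=True))  # usage shares
```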
The cause and effect diagram (Figure 3) highlights four components that affect usability,
that is, driver, signboard, road, and environment. Driver factors include age, gender, experi-
ence, familiarity with signs, familiarity with roads, and driver behavior; road factors include
Figure 1. Signboards with two languages, color combination, variable font size, and graphics.
speed of vehicle, type of road, traffic conditions, road infrastructure, and advertisements;
signboard design attributes include shape, size, color, content, position, and condition; envi-
ronmental factors include weather, light, and monitoring system. Numerous studies have
provided evidence of the differing impacts of various factors on traffic sign effectiveness.
For example, sign layout, shape, and familiarity would improve driver comprehension and
hence driver response, while damaged signs, obstructing vegetation, and advertisements
would make signs unusable for the driver. Higher speed, driver age, and traffic volumes would
impair sign usability as a result of the shorter driver response times necessitated by them.
4 RESULTS
Kottayam contains 406 km of state highways and a total road length of 3,449 km. The three
main types of signs, that is, mandatory signs, cautionary signs, and information signs, have
been used regularly along the roads (see, for example, Figures 1 and 2). The languages used
are predominantly Malayalam and English. The color combination for the text on informa-
tion signs is white text on a green background.
The sample was relatively young (mean = 25.19 years; SD = 8.863 years) with 91% male
respondents. A proportion (50%) of respondents used two-wheelers. A majority of drivers
(58%) reported that they do not use signboards or only use them sometimes (36%). The
reasons for non-usage or low usage could be attributed to operational factors, environmental
factors or human factors. Some of these aspects are examined in the following sections.
4.3 Discussion
Numerous factors relating to driver, road, signboard, and environment affect the use of traf-
fic signs (see Figure 3). Readability, font size, language, position, long-distance visibility, and
multi-sign configuration are prominent attributes that affect sign usability. Advertisements,
greenery, buildings, and high speed are road factors that reduce traffic sign usability. Driver
factors such as driver age and eyesight are perceived to have a negative effect, but road famili-
arity and driver experience are perceived to have a positive effect on traffic sign usability. The
environmental factors of weather, light, and police presence affect traffic sign usability. For
better use and compliance with road signage there is a need to redesign traffic signs. A user-
centered design of traffic signs that considers driver, road, signboard, and environmental fac-
tors would enable better comprehension of the signs, their more effective usability, and better
driver responses. Ultimately, this would lead to better traffic flow and safety performance.
Traffic signs could be made better by using ergonomic principles (Jamson & Mrozek, 2017).
From the survey carried out, 32% of the respondents felt that an on-board recognition
system could improve sign usability. A good traffic sign could help in better design of intelli-
gent driving-aid systems (Amditis et al., 2010; Wen et al., 2016). The respondents emphasized
the use of reflective signs (30%) and improved road infrastructure (20%) for enhanced traffic
sign usability.
We make the following specific suggestions:
– Signboards should be cleaned and maintained every six months.
– The font size, line spacing and sign size should be as per the Indian standards of motor
vehicle legislation.
– Focus on better content layout in signs for better readability.
– Position and align signboards to maximize long-distance visibility.
– Remove greenery regularly from the lines of sight of signs.
– Provide for night-time sign visibility through reflective signs or night lighting.
– Examine the possibility of providing user-compatible in-vehicle traffic sign devices that
help to address human age or eyesight issues.
– Increase monitoring efforts for traffic sign compliance.
5 CONCLUSION
The road traffic sign usability issues of drivers in Kottayam district were studied. A majority
of the drivers surveyed (58%) do not use the signboards or only use them sometimes (36%).
ACKNOWLEDGMENTS
We thank Mrs. Asha Lakshmi, Ms. Harsha Surendranand, Dr. Vinay V. Panikar, Dr. Basha
S.A, and students of M. Tech 2016 for helping us carry out this study.
REFERENCES
Amditis, A., Pagle, K., Joshi, S. & Bekiaris, E. (2010). Driver–Vehicle–environment monitoring
for on-board driver support systems: Lessons learned from design and implementation. Applied
Ergonomics, 41(2), 225–235.
Ben-Bassat, T. & Shinar, D. (2015). The effect of context and drivers’ age on highway traffic signs com-
prehension. Transportation Research Part F: Traffic Psychology and Behaviour, 33, 117–127.
Cristea, M. & Delhomme, P. (2015). Factors influencing drivers’ reading and comprehension of on-
board traffic messages. European Review of Applied Psychology, 65(5), 211–219.
Di Stasi, L.L., Megías, A., Cándido, A., Maldonado, A. & Catena, A. (2012). Congruent visual infor-
mation improves traffic signage. Transportation Research Part F: Traffic Psychology and Behaviour,
15(4), 438–444.
Domenichini, L., La Torre, F., Branzi, V. & Nocentini, A. (2017). Speed behaviour in work zone crosso-
vers. A driving simulator study. Accident Analysis & Prevention, 98, 10–24.
Jamson, S. & Mrozek, M. (2017). Is three the magic number? The role of ergonomic principles in cross
country comprehension of road traffic signs. Ergonomics, 60(7), 1024–1031.
Kazemi, M., Rahimi, A.M. & Roshankhah, S. (2016). Impact assessment of effective parameters on
drivers’ attention level to urban traffic signs. Journal of the Institution of Engineers (India): Series
A, 97(1), 63–69.
Khalilikhah, M. & Heaslip, K. (2016). The effects of damage on sign visibility: An assist in traffic sign
replacement. Journal of Traffic and Transportation Engineering, 3(6), 571–581.
Ng, A.W. & Chan, A.H. (2008). The effects of driver factors and sign design features on the comprehen-
sibility of traffic signs. Journal of Safety Research, 39(3), 321–328.
Ou, Y.K. & Liu, Y.C. (2012). Effects of sign design features and training on comprehension of traffic signs
in Taiwanese and Vietnamese user groups. International Journal of Industrial Ergonomics, 42(1), 1–7.
Shinar, D. & Vogelzang, M. (2013). Comprehension of traffic signs with symbolic versus text displays.
Traffic & Transportation Research, 44(1), 3–11.
Wen, C., Li, J., Luo, H., Yu, Y., Cai, Z., Wang, H. & Wang, C. (2016). Spatial-related traffic sign inspec-
tion for inventory purposes using mobile laser scanning data. IEEE Transactions on Intelligent
Transportation Systems, 17(1), 27–37.
Yuan, L., Ma, Y.F., Lei, Z.Y. & Xu, P. (2014). Driver’s comprehension and improvement of warning
signs. Advances in Mechanical Engineering, 6, 582–606.
1 INTRODUCTION
Production industries generally involve various processes such as design, manufacturing and marketing, and the manufacturing process consists of operations such as welding, milling, etc. In reality, numerous operations may be carried out on a raw material before it emerges as a final product; often other materials, some totally different in nature, are added to or removed from the original material. Each of these operations consumes a different processing time and needs a separate machine. Hence, on the shop floor each machine is used for one job at a time, and once a job is done the setup on the machine must be changed for the next job. Setup times are invariably involved in all scheduling situations. However, in many cases they are simply added to the processing times; this certainly reduces the complexity of the problem, but it affects the quality of the solutions obtained. Hence, there is a need for explicit consideration of setup times, which is addressed in the present study.
On the shop floor there are many configurations, such as single machine, parallel machines, flow shop and job shop. Real manufacturing situations, however, encounter numerous variations of these basic shop configurations. It is observed that more than one-third of production systems follow the flow shop configuration (Foote and Murty, 2009). A flow shop is characterised by a unidirectional flow of work: there are n jobs to be processed on m machines, and the order in which the jobs are processed is assumed to be the same on all machines, i.e. a permutation flow shop. For n jobs there are therefore n! candidate sequences. The present research considers a realistic variation of the general flow shop: a flow shop operating in a Sequence-Dependent Setup Time (SDST) environment. When sequence-dependent setup times are included, the problem becomes NP-complete (Gupta, 1986).
Flow shop scheduling problems are widely encountered in industry, and reducing the makespan is the objective desired in most situations. Exact solution methodologies are computationally viable only for small problem instances.
2 LITERATURE REVIEW
Flow shop scheduling problems have been the subject of intense study over the past few decades. The literature review in the present paper focuses on work on flow shop scheduling with and without setup times for jobs. The earlier works did not take setup times into consideration, and both constructive and improvement heuristics have performed quite well on them. Makespan minimization is given prime importance in almost all of these studies, and for practical size problems researchers have used either constructive or improvement heuristics.
Hence the literature review can be divided into two sections.
4 SOLUTION METHODOLOGY
The present study proposes a novel variation of neighbourhood search for scheduling the SDST flow shop. Generally, neighbourhood search procedures use only one type of neighbourhood. In order to intensify the search, two different types of neighbourhood are examined within the same local search: swap and insertion neighbourhoods. In the swap neighbourhood, two job positions are randomly generated and their contents are mutually exchanged. In the insertion neighbourhood, a job is randomly selected and inserted at a randomly generated position. The advantage of this procedure is that the two neighbourhoods generate mutually exclusive sets of solution sequences, which results in an intensified search. The proposed heuristic involves local and global search parameters, which are optimized to make it a robust algorithm. The working procedure of the algorithm is as shown.
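A minimal sketch of the two neighbourhood moves around a simple SDST flow shop makespan evaluator (the data, the zero initial setup, and the non-anticipatory setup assumption are illustrative simplifications, not the exact benchmark model or the authors' full RNS algorithm):

```python
import random

def makespan(seq, proc, setup):
    """proc[j][m]: processing time of job j on machine m.
    setup[m][a][b]: setup time on machine m when job b follows job a.
    Setups are assumed non-anticipatory; the first job needs no setup."""
    n_mach = len(proc[0])
    done = [0.0] * n_mach        # completion time of the last job per machine
    prev = None
    for j in seq:
        for m in range(n_mach):
            s = setup[m][prev][j] if prev is not None else 0.0
            ready = done[m - 1] if m > 0 else 0.0  # job j done on machine m-1
            done[m] = max(done[m] + s, ready) + proc[j][m]
        prev = j
    return done[-1]

def swap_neighbour(seq):
    """Exchange the jobs at two randomly chosen positions."""
    a, b = random.sample(range(len(seq)), 2)
    s = list(seq)
    s[a], s[b] = s[b], s[a]
    return s

def insertion_neighbour(seq):
    """Remove a randomly chosen job and re-insert it at a random position."""
    s = list(seq)
    job = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), job)
    return s

# Tiny example: 3 jobs, 2 machines, one setup matrix shared by both machines.
random.seed(1)
proc = [[3, 4], [2, 5], [4, 3]]
setup = [[[0, 1, 2], [1, 0, 1], [2, 1, 0]]] * 2
seq = [0, 1, 2]
print(makespan(seq, proc, setup), makespan(swap_neighbour(seq), proc, setup))
```

Because a swap always exchanges two jobs while an insertion shifts a block of jobs by one position, the two moves generate disjoint sets of neighbouring sequences, which is the intensification property noted above.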
The makespan results are obtained for all 1080 problem instances. They are compared with the results of the Genetic Algorithm (GA) obtained by Vanchipura and Sridharan. The analysis methods used are graphical analysis, Relative Performance Index (RPI) analysis and statistical analysis. It is found that the Robust Neighbourhood Search (RNS) performs better than the GA for all problem instances, and as the problem size increases, in terms of both setup times and number of jobs, the difference becomes more and more evident. The RPI is defined as

RPI = (RNS − GA) / GA

where RNS and GA denote the makespans obtained by the respective algorithms. For the two sets of 200-job problems, i.e. 200×10 and 200×20, the RPI values are respectively −0.02706 and −0.044. The RPI value for the 500×20 problem is −0.06411. These results clearly show that RNS performs better than GA.
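A one-line check of this index (the makespan numbers below are hypothetical, chosen to reproduce the reported −0.044; a negative RPI means RNS found the shorter makespan):

```python
def rpi(rns_makespan, ga_makespan):
    """Relative performance index: RPI = (RNS - GA) / GA."""
    return (rns_makespan - ga_makespan) / ga_makespan

print(rpi(9560.0, 10000.0))   # -0.044
```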
Out of these, the 100% setup time level is taken as the base case and graphs are plotted. There are 12 graphs corresponding to the 12 different problem sizes at 100%. Due to space limitations, five graphs are shown (Figures 1 and 2), corresponding to the 20×10, 50×10, 100×10, 200×10 and 500×20 problem sizes.
The results clearly show that the new algorithm, i.e. the Robust Neighbourhood Search, performs better than GA for all problem instances.
6 CONCLUSION
The proposed algorithm, namely the Robust Neighbourhood Search, is tested on altogether 1080 problem instances with varying numbers of jobs, machines and setup times. The parameters of the heuristic are optimized using a design of experiments methodology. Further, experiments were carried out and makespan results were obtained. Three stages of analysis (graphical analysis, relative performance index analysis and statistical analysis) are performed on the results obtained. In these analyses, the proposed heuristic is compared with GA and is found superior.
The main advantage of RNS is that the algorithm is very flexible and robust. It gives improved results in successive iterations without getting trapped in local optima. A further advantage is that the search techniques can easily be changed to adapt to real-life situations, and the local and global search parameters can also be changed. The statistical analysis reveals that for some problem groups the proposed algorithm is not significantly superior in spite of the better makespan results obtained, which can be identified as a shortcoming of the proposed heuristic.
REFERENCES
Gupta, J.N.D. (1986). The two-machine sequence dependent flow-shop scheduling problem. European Journal of Operational Research, 24(3), 439–446.
Laha, D. & Chakraborty, U.K. (2009). A constructive heuristic for minimizing makespan in no-wait flow shop scheduling. International Journal of Advanced Manufacturing Technology, 41(1–2), 97–109.
Liu, B. (2007). An effective PSO-based algorithm for scheduling. IEEE, 37(1), 45–52.
Malakooti, B., Kim, H. & Sheikh, S. (2011). Bat intelligence search with application to multi-objective multiprocessor scheduling optimization. International Journal of Advanced Manufacturing Technology, 60(9), 1071–1086.
Marichelvam, M. (2012). An improved hybrid Cuckoo Search (IHCS) metaheuristic algorithm for permutation flow shop scheduling problems. International Journal of Bio-Inspired Computation, 4(4), 116–128.
Parthasarathy, S. & Rajendran, C. (1997). A simulated annealing heuristic for scheduling to minimise weighted tardiness in a flow shop with sequence-dependent setup times of jobs: A case study. Production Planning & Control, 8(5), 475–483.
ABSTRACT: Knowledge Management (KM) is vital in the case of the construction indus-
try because of its impact on integrating knowledge within and outside the industry. Knowl-
edge management implementation strategies play an important role in increasing project
performance. The need to reduce Cost of Quality (CoQ) is clear, but the effect of KM in low-
ering CoQ is uncertain. This paper reviews the factors contributing to CoQ and the effect of
KM on this. A literature survey and qualitative inquiries were adopted to address the research
aims. Organizations should understand that there is a native CoQ problem that needs to be
addressed. The main factors contributing to CoQ are design changes, errors and omissions,
and poor skills. Here, the logic is based on the desire to reduce CoQ and the need to tackle as
well as integrate knowledge across personal, project, organizational, and industry boundaries.
Knowledge management was found to have a positive impact on lowering the cost of quality.
1 INTRODUCTION
2 LITERATURE REVIEW
2.1 Knowledge
Knowledge can be defined as the theoretical or practical understanding of a subject: a combination of information, background and understanding that can be captured, utilized and shared for business purposes (Wibowo & Waluyo, 2015). Explicit knowledge is knowledge that can be quantified, captured, examined and easily passed on to others in a codified format; it is the type of knowledge that can be expressed in words and numbers, and it can be augmented, transferred, disseminated and transformed into facts in a methodical and formal way (Wibowo & Waluyo, 2015). Tacit knowledge, on the other hand, comes from one's experience. It can be considered as human knowledge in the form of intuition, insight, talent, experience, body language, or a value or belief. It is highly personal and context-specific, and is very difficult to create, articulate in a few words, or distribute to a community.
3 RESEARCH METHODOLOGY
This paper reviews the factors contributing to the cost of quality and the effect of knowledge
management on the cost of quality. A literature survey and qualitative inquiries were adopted
to address the research aims.
The main factors contributing to the cost of quality are established using a literature survey.
Qualitative inquiries are used to identify the effect of knowledge management on the cost of qual-
ity. These are carried out using semi-structured interviews with open-ended questionnaires. For
the research, ten experts (see Table 1) were selected from construction companies across Thiruvananthapuram that have knowledge management strategies in place. The questionnaire has three sections.
The first consists of general information, the second deals with the factors contributing to the
cost of quality, and the third section inquires into the effect of knowledge management on CoQ.
Ten respondents were asked to rate the intensity of effect of KM processes on the compo-
nent elements of CoQ, that is, design changes, poor skills, and errors and omissions, based
on their own experience on construction projects. A four-point Likert scale was used to rate
the effect as follows: 1 = Strong Negative Impact; 2 = Negative Impact; 3 = Positive Impact;
4 = Strong Positive Impact. The mean values of the ratings were considered, and the process with the most impact on CoQ in practice was determined.
Most of the respondents rated the KM processes as having a positive impact or strong
positive impact, that is, as 3 and 4. The data collected were analyzed and the leading five
KM processes that affect cost of quality were found by taking the mean of the values. This is captured in Tables 2, 3 and 4, and also represented in the form of pie charts (see Figures 1, 2, and 3). In the first case, that is, for the effect of KM processes on omissions and errors, the responses obtained from the ten interviewees were 3, 3, 3, 3, 4, 4, 4, 4, 4 and 3. The average value was 3.5. All other means were similarly calculated.

Table 1. Details of the experts interviewed.
A 36 1 1
B 27 1 1,2
C 24 1,2 1,2,3
D 12 1 1,2
E 14 1 1,2
F 12 1 1
G 10 1 1
H 13 1,2 1,2
I 12 1,2 1,2
J 14 1 1
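As an illustration of the averaging used above, a minimal Python sketch is given below; the first row of ratings is the errors-and-omissions data quoted in the text, while the second row is a hypothetical placeholder, as are the process labels.

# Rank KM processes by their mean rated effect on a CoQ element.
ratings_on_errors = {
    "process A": [3, 3, 3, 3, 4, 4, 4, 4, 4, 3],  # row quoted in the text; mean 3.5
    "process B": [3, 4, 3, 4, 3, 4, 3, 4, 3, 4],  # hypothetical placeholder
}
means = {process: sum(r) / len(r) for process, r in ratings_on_errors.items()}
for process, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{process}: {m:.2f}")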
The cost of quality mainly involves design changes, errors and omissions, and poor skills. The
major KM processes reported to affect errors and omissions are knowledge champions, knowl-
edge capture, knowledge creation, knowledge transfer, and knowledge sharing. Those affecting
design changes are knowledge sharing, knowledge creation, knowledge capture, and knowledge
dissemination. On the other hand, poor skills are mainly affected by knowledge identification,
knowledge transfer, knowledge capture, knowledge champions, and knowledge creation.
The main contributing factors to the cost of quality are errors and omissions, design changes,
and poor skills. These factors are summarized in Tables 5, 6 and 7.
According to the results, knowledge management has the most impact in reducing errors and omissions. The unifying process across all three CoQ contributing factors is knowledge capture.
7 CONCLUSION
The main aims of this paper were to determine the factors contributing to the cost of qual-
ity and the effect of knowledge management on CoQ. It was found that the main factors
contributing to CoQ are errors and omissions, design changes, and poor skills. From the pie charts, it is clear that KM processes such as knowledge sharing, knowledge creation, knowledge capture, knowledge dissemination, knowledge transfer, and knowledge champions have a positive impact on lowering the cost of quality.
REFERENCES
Forcada, N. & Macarulla, M. (2013). Knowledge management perceptions in construction and design companies. Automation in Construction, 29, 83–91.
Gromoff, A., Kazantsev, N. & Bilinkis, J. (2016). An approach to knowledge management in construc-
tion service-oriented architecture. Procedia Computer Science, 96, 1179–1185.
Nonaka, I. & Konno, N. (1998). The concept of “Ba”: Building a foundation for knowledge creation.
California Management Review, 40(3), 40–54.
Wibowo, M.A. & Waluyo, R. (2015). Knowledge management maturity in construction companies.
Procedia Engineering, 125, 89–94.
1 INTRODUCTION
Generally, maintenance involves the set of activities carried out in industry to ensure that all machinery and other physical assets are available for production. The main purpose of industrial maintenance is to minimize breakdowns and to maintain the efficiency of the production facilities at the lowest possible cost. Maintenance activities and their execution depend on the manufacturing system and the layout of the plant. In any case, however, maintenance should not be considered a cost-centric activity, but a profit-generating function (Alsyouf 2007).
Maintenance helps add value to the organization through better utilization of production facilities, enhanced product quality, and reduced rework and scrap. The ISO 55000:2014 Asset Management System upholds that "assets exist to provide value to the organization and its stakeholders". Unfortunately, many companies still consider maintenance activities a "necessary evil", owing to a blurred perception of its role in attaining the company's objectives and goals (Duffuaa et al. 2002). For those companies, the first step is to change the corporate mindset so that the significant role of maintenance in achieving customer-oriented performance parameters, such as quality and on-time delivery, is recognized.
Unexpected failures affect three key elements of competitiveness: quality, cost, and productivity. In the modern world, all firms are striving hard to elevate these key features to develop a strategic advantage over their competitors. Simply waiting for failures to occur is not affordable in today's business environment. Hence, companies have to adopt the different maintenance strategies best suited to their businesses.
The concept of Industry 4.0 originated in Germany, but its vision has caught the attention of organizations across the globe (Zezulka et al. 2016). Industry 4.0 has spawned a new wave of technology that is revolutionizing the manufacturing environment through the "smart factory", in which machines cooperate with humans in real time via cyber-physical systems.
2 META-ANALYSIS OF LITERATURE
A vast number of research papers have been published in the area of plant maintenance, and reviewing every paper in the literature individually is impractical. The Web of Science (WoS) database (after proper refinement) returns 12,576 articles connected with the keywords Preventive Maintenance, Predictive Maintenance, Corrective Maintenance and Reliability Centered Maintenance as of October 2017. The collection includes 7,736 journal articles, 4,744 conference proceedings and 609 reviews.
This huge volume of articles makes it difficult to provide a simple narration and critical
review of each research article. To overcome this common issue in conducting a literature
review, Glass et al. (1981) introduced a new approach known as Meta-analysis. Meta-analysis
is a methodology used to integrate research findings from a large body of articles using sta-
tistical analysis and sophisticated measurement techniques (Krishnaswamy et al. 2007). This
methodology is widely accepted in the research community to obtain firsthand information
from a large pool of articles from which the current direction of research can be projected
and significant articles in the domain can be shortlisted for further study.
To achieve the objective of this review paper, statistical techniques are applied to data retrieved from a pool of research papers extracted by data mining using the BibExcel software tool. BibExcel is used to carry out bibliographic and statistical analysis by extracting data of a textual nature, such as the title of the paper, author names, journal name, keywords, etc. This free software tool also allows modifying and/or adjusting data imported from various databases, including Scopus, WoS and Mendeley, among others (Fahimnia et al. 2015).
Initially, a WoS output file is created in plain-text format containing the relevant information of the top 250 cited research papers in the maintenance domain, to be used as input for BibExcel. The result of the analysis discloses the major contributors to the domain, the major journals publishing top-quality articles, the tools used in maintenance studies, etc.
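As a rough illustration of the kind of frequency counts performed with BibExcel, the Python sketch below tallies author and keyword occurrences. It assumes a Web of Science plain-text export in which each line begins with a two-letter field tag (AU for author, DE for author keywords); the file name is hypothetical, and multi-line fields are ignored for brevity.

# Count author and keyword frequencies in a WoS plain-text export.
from collections import Counter

authors, keywords = Counter(), Counter()
with open("wos_top250.txt", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        tag, _, value = line.partition(" ")
        if tag == "AU":
            authors[value.strip()] += 1
        elif tag == "DE":
            for kw in value.split(";"):
                keywords[kw.strip().lower()] += 1

print(authors.most_common(10))   # major contributors to the domain
print(keywords.most_common(10))  # dominant topics/tools in maintenance studies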
3 MAINTENANCE STRATEGIES
Maintenance strategies can be broadly classified into two categories: reactive and proactive. Reactive maintenance focuses on repairing an asset once failure occurs. Proactive maintenance focuses on avoiding repairs and asset failure through preventive and predictive methods. These strategies meet the objectives laid out in the philosophy of Total Productive Maintenance (TPM).
Predictive Maintenance (PdM) is currently the most discussed strategy in the field of maintenance. It is based on predictive models built on the principles of machine learning. When building predictive models, historical data are used to train the model, which can then recognize hidden patterns and identify those patterns in future data. These models are trained with examples described by their features and the target of prediction. The trained model is expected to make predictions on the target by analyzing the features of new examples. It is crucial that the model captures the relationship between the features and the target of prediction. In order to train an effective machine learning model, the training data must include features that actually have predictive power towards the target of prediction.
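A minimal sketch of this train-then-predict workflow is shown below, using synthetic condition-monitoring features and RUL targets in place of real run-to-failure data such as the PHM sets.

# Train a model on historical (feature, RUL) examples, then score on new data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # five synthetic condition-monitoring features
rul = 100 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=2.0, size=1000)  # synthetic RUL

X_tr, X_te, y_tr, y_te = train_test_split(X, rul, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 of RUL predictions on held-out data:", model.score(X_te, y_te))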
Quantitative data is therefore of foremost importance for training the model. Since failures occur rarely and such data are difficult to obtain in real time, a few data repositories, such as that of the Prognostics and Health Management (PHM) community, provide run-to-failure data to encourage the development of prognostic algorithms. Saxena et al. (2008) describe how damage propagation can be modeled in various modules of aircraft gas turbine engines for developing and testing prognostic algorithms. Their paper also presents an evaluation policy for performance benchmarking of different prognostic algorithms.
In prognostics, the objective is to predict the Remaining Useful Life (RUL) before a failure occurs, given the current machine condition and past operation profile. Since the dataset was made available by PHM in 2008, researchers have built different prognostic methods. Ramasso & Saxena (2014) reviewed the different approaches applied to the PHM dataset and analyzed them to understand why some approaches worked better than others, presenting the three top-ranked approaches.
The previous discussion reveals that although the application of PdM is more beneficial com-
pared to other strategies from a practical point of view, further research on PdM is necessary
in order to realize the concept of Industry 4.0. The application of PdM is more complex and
expensive because PdM heavily relies on the implementation of complex sensor systems to
monitor the health conditions of the equipment in real time. As discussed, research based
on data repository provided by Prognostic and Health Management (PHM) seems the right
direction for future research on condition based maintenance of sensitive and complex equip-
ment/systems. As observed from the literature, around 70 publications have been found that
have utilized the PHM dataset for the development of prognostic algorithms. However, the
PHM dataset exhibits an exponential degradation pattern. No comparison has been made to
check whether the same prognostic algorithms can perform effectively on different datasets.
6 CONCLUSION
This paper presented a systematic review of different maintenance strategies. Although many
papers have been published in this area, only a few papers present the advantages and chal-
lenges associated with implementing each maintenance strategy. This paper identifies certain 'core' articles which may prove beneficial for those seeking to research the area.
We identified some of the recent path-breaking research papers that contribute towards the
advancements in maintaining production facilities and other complex machinery to achieve
efficiency and effectiveness.
Furthermore, there has been tremendous enthusiasm all over the world for implementing the predictive maintenance strategy for monitoring and executing maintenance activities. This motivation is associated with advancements in the field of systems engineering, especially in machine learning, data analytics, and instrumentation technologies.
Hence, PdM can be seen as the most promising maintenance strategy for realizing the objec-
tives of Industry 4.0. However, a single maintenance strategy cannot be the most economical
strategy for all the equipment in the plant. Especially for non-critical items, it is always better
to follow breakdown maintenance. Therefore, it can be concluded that an optimum mix of
the above strategies such as RCM, with more emphasis on predictive maintenance, is the
most suited maintenance strategy in the emerging industrial scenario.
REFERENCES
Alsyouf, I. 2007. The role of maintenance in improving companies’ productivity and profitability. Int.
J. Prod. Eco. 105(1): 70–78.
Arunraj, N.S. & Maiti, J. 2010. Risk-based maintenance policy selection using AHP and goal program-
ming. Safety Science, 48(2): 238–247.
Bucknam, J.S. 2017. Data analysis and processing techniques for remaining useful life estimations.
Carretero, J., Pérez, J.M., García-Carballeira, F., Calderón, A., Fernández, J., García, J.D., Lozano, A., Cardona, L., Cotaina, N. & Prete, P. 2003. Applying RCM in large scale systems: a case study with railway networks. Relia. Engng. & System Safety 82(3): 257–273.
Duffuaa, S.O., Al-Ghamdi, A.H. & Al-Amer, A. 2002. Quality function deployment in maintenance
work planning process. In Proc. of the 6th Saudi engineering conference KFUPM, Dhahran, Kingdom
of Saudi Arabia.
Efthymiou, K., Papakostas, N., Mourtzis, D. & Chryssolouris, G. 2012. On a predictive maintenance
platform for production systems. Procedia CIRP, 3: 221–226.
Fagogenis, G., Flynn, D. & Lane, D., 2014. Novel RUL prediction of assets based on the integration of
auto-regressive models and an RUS Boost classifier. Prognostics and Health Management (PHM),
2014 IEEE Conference: 1–6.
Fahimnia, B., Sarkis, J. & Davarzani, H. 2015. Green supply chain management: A review and biblio-
metric analysis. Int. J. Prod. Eco.162: 101–114.
Glass, G.V., McGaw, B. & Smith, M.L. 1981. Meta-Analysis in Social Research. Beverly Hills, CA: Sage Publications.
Heimes, F.O., 2008. Recurrent neural networks for remaining useful life estimation. In Prognostics and
Health Management (PHM), 2008 IEEE Conference: 1–6.
Jeong, I.J., Leon, V.J. & Villalobos, J.R. 2007. Integrated decision-support system for diagnosis, mainte-
nance planning, and scheduling of manufacturing systems. Int. J. Prod. Research, 45(2): 267–285.
Khan, F.I. & Haddara, M.M. 2003. Risk-based maintenance (RBM): a quantitative approach for
maintenance/inspection scheduling and planning. J. of loss prevention in the process industries, 16(6):
561–573.
Kim, N.-H., Joo-Jo, C. & An, D. 2017. Prognostics and Health Management of Electronics.
Krishnaswamy, K.N., Sivakumar, A.I., Mathirajan, M. 2007. Management Research Methodology,
Dorling Kindersley: Pearson India Publications, India.
Kuhn, M. & Johnson, K. 2013. Applied predictive modeling (Vol. 810). New York: Springer.
Lewis, S.A. & Edwards, T.G. 1997. Smart sensors and system health management tools for avionics and
mechanical systems. 16th DASC. AIAA/IEEE Digital Avionics Systems Conference. Reflections to the
Future. Proceedings 2: 8.5–1−8.5–7. doi: 10.1109/DASC.1997.637283.
Nielsen, J.J. & Sørensen, J.D. 2011. On risk-based operation and maintenance of offshore wind turbine
components. Relia. Engng. & System Safety, 96(1): 218–229.
Peel, L. 2008. Data driven prognostics using a Kalman filter ensemble of neural network models. 2008
Int. Conference on Prognostics and Management 1–6, doi: 10.1109/PHM.2008.4711423.
Ramasso, E. & Saxena, A. 2014. Performance Benchmarking and Analysis of Prognostic Methods for
CMAPSS Datasets. Int. J. Prognostics and Health Management 5(2):1–15.
Saxena, A., Goebel, K., Simon, D. & Eklund, N. 2008. Damage propagation modeling for aircraft engine prognostics. In Prognostics and Health Management (PHM), 2008 IEEE Conference.
Selvik, J.T. & Aven, T. 2011. A framework for reliability and risk centered maintenance. Relia. Engng.
and System Safety 96(2):324–331.
Si, X.S., Zhang, Z.X. & Hu, C.H., 2017. Data-Driven Remaining Useful Life Prognosis Techniques:
Stochastic Models, Methods and Applications. Springer.
Sørensen, J.D. 2009. Framework for risk-based planning of operation and maintenance for offshore
wind turbines. Wind energy, 12(5): 493–506.
Tupa, J., Simota, J. & Steiner, F. 2017. Aspects of risk management implementation for Industry 4.0,
Procedia Manufacturing 11, 1223–1230.
Vishnu, C.R. & Regikumar, V. 2016. Reliability Based Maintenance Strategy Selection in Process Plants:
A Case Study. Procedia Technology 25:1080–1087.
Wang, T., Yu, J., Siegel, D. & Lee, J., 2008. A similarity-based prognostics approach for remaining useful
life estimation of engineered systems. In Prognostics and Health Management (PHM) 2008. IEEE
Conference:1–6.
Zezulka, F., Marcon, P. Vesely, I. & Sajdl, O. 2016. Industry 4.0 – An Introduction in the phenomenon.
IFAC-Papers Online, 49(25), 8–12.
K.S. Athira
Department of Materials Engineering, Indian Institute of Science, Bangalore, Karnataka, India
K.W. Ng
School of Materials Science and Engineering, Nanyang Technological University, Singapore
ABSTRACT: Atherosclerosis causes heart disease and stroke and is a leading cause of death in many countries. Nanoliposomes are potential candidates for targeted drug delivery in its treatment. The size of nanoliposomes affects their cellular uptake, and an optimal size range is necessary for effective site-specific drug delivery. Such optimum-sized nanoliposomes were
developed. Their sizes were characterized in this study using dynamic light scattering and
nanoparticle tracking analysis. The sizes of the nanoliposomes were found to be in the ranges
of 79–128 nm via the former method and 86–99 nm via the latter. Liposomes grafted with
polyethylene glycol show an improved stability and increased circulation time, while those
grafted with fluocinolone acetonide help to reduce inflammation. It was observed that nor-
mal nanoliposomes had a larger size than these grafted liposomes, which each had a larger
size than those grafted with both polyethylene glycol and fluocinolone acetonide.
1 INTRODUCTION
Four types of nanoliposome were procured from the collaborative research program between
the School of Materials Science and Engineering, Nanyang Technological University,
Singapore, and other international universities. The four types of liposomes were:
a. Normal liposomes (blank);
b. Liposomes grafted with PEG;
c. FA-loaded liposomes;
d. Liposomes grafted with PEG-FA.
Representative size data for the four types of nanoliposomes from the DLS measurement is
shown in Figure 1. It can be seen from the figure that the size of the normal (blank) liposome is
approximately 100 nm. Similarly, data were obtained for all the different liposomes in each and
every dilution. The variation in the sizes of the different types of liposomes at varying dilutions
found by the DLS method is shown in Figure 2. The size of the nanoliposomes can be seen to be
in the range of 79–140 nm. The blank liposomes have a size range of 94–128 nm, with an aver-
age of 111 nm. The PEG-coated liposomes have a size range of 96–124 nm, with an average of
110 nm. The FA-loaded liposomes have a size range of 91–97 nm, with an average of 94 nm. The
PEG-FA-incorporated liposomes have a size range of 79–140 nm, with an average of 110 nm.
As the dilution of 10,000X is at the limit of the sensitivity of the DLS equipment, there is a large
amount of error in the data obtained at this dilution. So, excluding this, the average sizes of the
liposomes are plotted in Figure 3. Thus, the size of different types of nanoliposomes ranges from
79 to 128 nm, with an average of 104 nm. The trend of sizes from DLS measurement can be
described as: blank liposomes > PEG liposomes > FA liposomes > PEG-FA liposomes.
Representative size data from the NTA measurement is shown in Figure 4, in which the
concentration of particles of a particular size is plotted. In the given data, most of the parti-
cles have a size of 100 nm. Further, the small peaks of higher sizes can be ignored because of
the very small number of particles in that size range.
The average of the size values thus obtained from the 10,000X and 100,000X dilutions is shown in Figure 5. The sizes of the nanoliposomes were found to be in the range of 86–99 nm. The trend of sizes obtained from NTA measurement is: blank liposomes > PEG liposomes ≈ FA liposomes > PEG-FA liposomes.
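A minimal sketch of the kind of size-statistics comparison made between the two methods is given below; the measurement lists are illustrative placeholders, not the study's raw data.

# Compare mean and spread of liposome sizes reported by DLS and NTA.
import numpy as np

dls_sizes = np.array([94.0, 128.0, 96.0, 124.0, 91.0, 97.0, 79.0, 140.0])  # illustrative
nta_sizes = np.array([99.0, 95.0, 93.0, 86.0])                             # illustrative
for name, sizes in [("DLS", dls_sizes), ("NTA", nta_sizes)]:
    print(f"{name}: mean = {sizes.mean():.1f} nm, std = {sizes.std(ddof=1):.1f} nm")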
Figure 1. Representative data of dynamic light scattering: Intensity of scattered light with size of
nanoliposomes (blank).
Figure 3. Average sizes of the different types of nanoliposomes obtained by the dynamic light scat-
tering method.
Because NTA is a more accurate method, the overall trend obtained by combining the
results from DLS and NTA measurements can be summarized as: blank liposomes > PEG
liposomes ≈ FA liposomes > PEG-FA liposomes.
Larger liposomes exhibited increased internalization by h-monocytes (Epstein-Barash
et al., 2010), but with a higher degree of apoptosis induction (Takano et al., 2003). A sig-
nificant reduction in activity was also found after treatments with very small liposomes
(55 ± 15 nm) (Epstein-Barash et al., 2010). Hence, a size range between these two extremes,
such as 75–150 nm, would be ideal for a successful site-specific drug delivery.
It has been observed by Nicholas et al. (2000) that the addition of PEG or FA brought the
weight ratio of liposomes down due to permeabilities, reaction temperatures, and phase tran-
sition between the mushroom and brush regimes. This might be the reason for the reduction
in size observed in the liposomes grafted with PEG, FA, and both.
4 CONCLUSIONS
The size characterization study of the different types of nanoliposomes procured from the
collaborative research program between Nanyang Technological University, Singapore, and
other international universities was done using DLS and NTA methods.
The sizes of the nanoliposomes, both alone and grafted with different materials, were found to be in the range of 79–128 nm via the DLS method and 86–99 nm via the NTA method adopted in the present study.
Because all the samples were monodisperse, both techniques are appropriate and the results are comparable. The standard deviation with the DLS method was 30.5 and that with NTA was only 6.5. Hence, the NTA method is adjudged the better method, as it produces less deviation in the data.
The trend of size variations in the materials used in the current study can be described as: blank liposomes > PEG liposomes ≈ FA liposomes > PEG-FA liposomes. Thus, the nanoliposomes grafted with the two different materials necessary for drug delivery were found to be better than blank nanoliposomes with respect to size. The PEG-FA-grafted nanoliposomes from this laboratory, characterized with a size of 85.5 nm, are therefore found to be well suited for site-specific drug delivery for atherosclerosis with respect to size.
REFERENCES
Akbarzadeh, A., Rezaei-Sadabady, R., Davaran, S., Joo, S.W., Zarghami, N., Hanifehpour, Y. &
Nejati-Koshki, K. (2013). Liposome: Classification, preparation, and applications. Nanoscale
Research Letters, 8(1), 102.
Bergstrand, N. (2003). Liposomes for drug delivery: From physico-chemical studies to applications
(Doctoral dissertation, Acta Universitatis Upsaliensis, Sweden).
Epstein-Barash, H., Gutman, D., Markovsky, E., Mishan-Eisenberg, G., Koroukhov, N., Szebeni, J.
& Golomb, G. (2010). Physicochemical parameters affecting liposomal bisphosphonates bioactivity
for restenosis therapy: Internalization, cell inhibition, activation of cytokines and complement, and
mechanism of cell death. Journal of Controlled Release, 146(2), 182–195.
Gerrity, R.G. (1981). The role of the monocyte in atherogenesis: I. Transition of blood-borne mono-
cytes into foam cells in fatty lesions. American Journal of Pathology, 103(2), 181–190.
Harris, J.M., Martin, N.E. & Modi, M. (2001). Pegylation. Clinical Pharmacokinetics, 40(7), 539–551.
Kelly, C., Jefferies, C. & Cryan, S.A. (2010). Targeted liposomal drug delivery to monocytes and macro-
phages. Journal of Drug Delivery, 2011, 727241.
Nicholas, A.R., Scott, M.J., Kennedy, N.I. & Jones, M.N. (2000). Effect of grafted polyethylene glycol
(PEG) on the size, encapsulation efficiency and permeability of vesicles. Biochimica et Biophysica
Acta (BBA) - Biomembranes, 1463(1), 167–178.
Ross, R. (1993). The pathogenesis of atherosclerosis: A perspective for the 1990s. Nature, 362(6423),
801–809.
Santos, N.D., Allen, C., Doppen, A.M., Anantha, M., Cox, K.A., Gallagher, R.C., ... Webb, M.S.
(2007). Influence of poly (ethylene glycol) grafting density and polymer length on liposomes: Relat-
ing plasma circulation lifetimes to protein binding. Biochimica et Biophysica Acta (BBA) - Biomem-
branes, 1768(6), 1367–1377.
Takano, S., Aramaki, Y. & Tsuchiya, S. (2003). Physicochemical properties of liposomes affecting apop-
tosis induced by cationic liposomes in macrophages. Pharmaceutical Research, 20(7), 962–968.
Torchilin, V.P. (2005). Recent advances with liposomes as pharmaceutical carriers. Nature Reviews Drug
Discovery, 4(2), 145–160.
Vafaei, S.Y., Dinarvand, R., Esmaeili, M., Mahjub, R. & Toliyat, T. (2015). Controlled-release drug
delivery system based on fluocinolone acetonide–cyclodextrin inclusion complex incorporated in
multivesicular liposomes. Pharmaceutical Development and Technology, 20(7), 775–781.
Wick, G., Knoflach, M. & Xu, Q. (2004). Autoimmune and inflammatory mechanisms in atherosclero-
sis. Annual Review of Immunology, 22, 361–403.
ABSTRACT: The objective of this research work is to investigate the compressive strength
of Glass-Carbon/epoxy hybrid laminate subjected to impact damage. Glass-Carbon/epoxy
hybrid laminate was fabricated using vacuum assisted compression molding and a novel
arrangement of quasi-isotropic sequence was followed. Coupon specimens were prepared
according to ASTM standard for low velocity drop impact and Compression After Impact
(CAI) to assess its compressive strength after the impact. Results showed that the stacking sequence minimized the impact damage area. The failure of the laminate after CAI was mainly due to buckling of the sub-laminate. The hybridization effect played a vital role in the performance of the laminate.
1 INTRODUCTION
2.1 Fabrication
Seven layers of plain-weave Carbon (warp)-Glass (weft) hybrid (C-G) fabric and six layers of plain-weave Glass (G) (warp and weft) fibers were selected for the laminate fabrication. Fibers were cut to a size of 550 mm × 400 mm. A total of 13 layers were used for the laminate. A novel quasi-isotropic sequence was selected, as shown in Figure 1(a). Epoxy resin LY556 and hardener HY952 were used in the ratio 10:1. Vacuum-assisted compression molding was used to fabricate the laminate. A pressure of 600 Pa was applied over the laminate for 4 hours; the laminate was cured at a temperature of 120°C for 12 hours and then left to cure at ambient temperature for 2 hours. Figure 1(b) shows the fabricated laminate of thickness 4.2 mm. Coupon specimens were prepared according to ASTM standards to determine the basic properties and CAI properties.
2.2 Experiments
2.2.1 Basic properties
A uniaxial tensile test was performed to determine the basic properties. Samples were prepared according to ASTM D3039. Table 1 shows the basic properties of the Carbon-Glass/epoxy hybrid laminate.
Figures 2–4 show the samples subjected to impact energies of 25 J, 35 J and 45 J, respectively. For 25 J and 35 J, a visible damage area was observed; in the sample impacted at 45 J, however, perforation was observed. Matrix cracking took place at 25 J. A reduction in the area of impact damage was observed compared with the work of Cartie & Irving (2002) on pure carbon.
The matrix crack that formed because of the impact usually propagates and ends at a place
where it meets a stiffer fiber. Since the adopted stacking sequence has stiffer fibers covering
all the directions, the crack propagation was terminated. In comparison with the damaged
Figure 5. CAI crack formation for (a) 25J (b) 35J (c) 45J.
Figures 9–11 show the SEM images of the CAI specimens for energies of 25 J, 35 J and 45 J, respectively. From the figures, it is evident that failure occurred because of sub-laminate buckling. Matrix cracking was dominant at 25 J. The dominant mechanisms for the laminates at energies greater than 25 J were fiber pullout, fiber micro-cracking and delamination.
The damage in CAI occurs in the direction perpendicular to the loading. The tensile or shear load that arises as a result of impact initiates the matrix crack, which propagates through the thickness of the laminate.
Figures 10 and 11 show the SEM images for energies greater than 25 J. These images evidence fiber pullout, fiber fracture and matrix-fiber de-bonding, resulting in severe plastic deformation.
4 CONCLUSION
From the above experimentation, the authors found out the following:
• Crack propagation due to impact can be terminated by stiffer fibers in the path of the crack.
• For energies less than 25 J, matrix cracking is predominant.
• For energies greater than 25 J, other mechanisms such as fiber pullout, fiber micro-cracking and delamination dominate.
• Delamination occurs as a result of sub laminate buckling.
• The contact force and time were inversely proportional.
• The time to failure and the displacement were inversely proportional.
• Stacking sequence is also a main parameter to determine the CAI strength of the
laminate.
• Hybridization results in an increased performance, where two or three fibers in a stacking
sequence, as mentioned in this research, can complement each other in the performance as
a single laminate.
REFERENCES
Alaattin Aktas et al. 2014. Impact and post impact (CAI) behavior of stitched woven-knit hybrid composites. Composite Structures. 116, 243–253.
Andrey Shipsha & Dan Zenkert. 2005. Compression after impact strength of sandwich panels with core crushing damage. Applied Composite Materials. 12, 149–164.
Aymerich F. & P. Priolo. 2008. Characterization of fracture modes in stitched and unstitched cross-
ply laminates subjected to low-velocity impact and compression after impact loading. Int. J. Impact
Engineering. 35, 591–608.
Berketis K. & D. Tzetzis. 2010. The compression after impact strength of woven and non-crimp fabric
reinforced composites subjected to long term water immersion ageing. J. Mater Sci. 45, 5611–5623.
Bin Yang et al. 2015. Study on the low velocity impact response and CAI behavior of foam filled sand-
wich panels with hybrid facesheet. Composite Structures. 132, 1129–1140.
Bruno Castanie et al. 2008. Core crushing criterion to determine the strength of sandwich composite
structures subjected to compression after impact, Composite Structures, 86, 243–250.
Cao D.F. et al. 2015. Compressive properties of SiC particle reinforced aluminum matrix composites
under repeated impact loading. Strength of Materials. 47, 61–67.
Cartie D.D.R. & P.E. Irving. 2002. Effect of resin and fibre properties on impact performance of CFRP.
Composites: Part A. 33, 483–493.
Daniele Ghelli & Giangiacomo Minak. 2011. Low velocity impact and compression after impact tests
on thin carbon/epoxy laminates. Composites: Part B. 42, 2067–2079.
Davies G.A.O. et al. 2004. Compression after impact strength of composite sandwich panels. Composite
Structures. 63, 1–9.
Gilioli A. et al. 2014. Compression after impact test (CAI) on NOMEX honeycomb sandwich panels
with thin aluminum skins. Composites: Part B. 67, 313–325.
Gonzalez E.V. et al. 2012. Simulation of drop weight impact and compression after impact tests on
composite laminates. Composite Structures. 94, 3364–3378.
Guoqi Zhang et al. 2013. The residual compressive strength of impact damaged sandwich structures
with pyramidal truss cores. Composite Structures. 105, 188–198.
Hakim Abdulhamid et al. 2016. Experimental study of compression after impact of asymmetrically
tapered composite laminate. Composite Structures. 149, 292–303.
Leeba Varghese
Department of Mechanical Engineering, Viswajyothi College of Engineering and Technology,
Ernakulam, India
ABSTRACT: Micro Electric Discharge Machining (micro-EDM) can be used to generate micro features and micro-level dimensions on a work-piece irrespective of the hardness of the material. This paper discusses the effect of polarity on tool wear during micro-EDM drilling of a stainless steel work-piece (SS 304). An experimental investigation was carried out to understand the effect of a change in polarity on tool wear using three different tool electrodes (Cu, brass and W). Direct polarity has a significant advantage over reverse polarity in reducing tool wear for all three electrodes. Further, observations indicated that the material removal rate for
stainless steel is maximum in the case of direct polarity.
Keywords: Tool wear rate, Tool electrodes, Material removal rate, Polarity
1 INTRODUCTION
EDM is a non-traditional machining process which involves the removal of electrical conduc-
tive material by a series of electric sparks between two electrodes submerged in a dielectric
fluid. The material removal mechanism involves the melting and vaporization of the work-
piece material caused by these electric sparks.
In the current scenario, micromachining of materials has become essential to make pre-
cise and accurate components (Yuangang et al. 2009). Micro-EDM is a recently developed
method that can be used for producing micro-parts within the range of 50 µm–100 µm. It is
an efficient machining process for the fabrication of miniaturized products, micro channels,
micro-metal holes and micromold cavities with a lot of merits resulting from its character-
istics of non-contact and thermal metal removal process (Yeakub Ali & Mohammed 2009).
The tool wear in micro-EDM directly affects the machining precision and efficiency (Jingyu et al. 2017). Hence, the minimization of tool wear is of great importance in micro-EDM.
An in-house built micro-EDM machine was used for the experimental investigation. Stainless steel specimens (30 × 20 × 5 mm) were cut using abrasive cutters. Copper, brass and tungsten were the tool electrodes used for the study. The details of the experiment are given in Table 1.
The major input parameters used in micro-EDM are input voltage, input current, pulse on-time and pulse off-time. Through a number of pilot experiments, optimum values of the process parameters that give good machining were identified and selected for the study. The experimental levels are presented in Table 2.
The properties of the tool electrodes and work-piece used are given in Tables 3 and 4. The experiment was carried out using separate tools and work-pieces for both direct and reverse polarities. The weights of both the tool and the work-piece were noted using a precision weighing balance before and after machining in each case. The Material Removal Rate (MRR) and Tool Wear Rate (TWR) were then calculated from these weight measurements.
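The paper's exact expression is not reproduced above; a minimal sketch of one common weight-loss formulation of these rates (an assumption here, not necessarily the authors' equation) is given below.

# Rate of material loss in g/min from before/after weighing.
def removal_rate(weight_before_g, weight_after_g, machining_time_min):
    return (weight_before_g - weight_after_g) / machining_time_min

mrr = removal_rate(25.0412, 25.0368, 10.0)  # hypothetical work-piece weights
twr = removal_rate(4.1207, 4.1195, 10.0)    # hypothetical tool weights
print(f"MRR = {mrr:.5f} g/min, TWR = {twr:.5f} g/min")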
Sl. No. Parameter Value
1 Input voltage 50 V
2 Pulse on-time 110 µs
3 Pulse off-time 30 µs
The effects of the change in polarity on the tool wear rate and the material removal rate are discussed here. The results are based on the experimental investigation performed by machining stainless steel (SS 304) specimens. The response table for TWR and MRR is shown in Table 5.
The tool wear is reduced by about 81.2% for the copper tool, 76.3% for the brass tool and 58.8% for the tungsten tool when direct polarity is used instead of reverse polarity.
In this study, the effect of change in polarity is evaluated for various tool electrodes by
measuring TWR and MRR. The following conclusions are derived from the experimental
investigation:
1. In direct polarity, tool wear is found to be minimal compared with reverse polarity, on account of the greater heat generation at the work-piece surface in the case of direct polarity.
2. Direct polarity also offers a larger MRR than reverse polarity for all three electrodes.
3. The comparative study of Cu, Brass and W tool electrodes revealed that Tungsten tool
offers minimum tool wear in both direct and reverse polarities.
4. High MRR is provided by the copper tool electrode while machining stainless steel.
5. From physical observation, deposit of tool electrode material on the work-piece surface is
higher in case of reverse polarity.
The study can be extended to determine the effect of a change in polarity on work-piece materials other than stainless steel, to identify the most suitable polarity for micro-EDM. In the present paper, the focus is on tool electrode wear, and hence surface roughness is not analyzed. Since surface roughness is also a major output parameter in micro-EDM, further studies are recommended on the effect of polarity on surface roughness.
ACKNOWLEDGEMENT
The authors would like to acknowledge the financial support of the Centre for Engineering
Research and Development (Proceedings no. C3/RSM86/2013, dated 09/02/2015 of Kerala
Technological University), Government of Kerala, India.
REFERENCES
Bissacco, G., Valentincic, J., Hansen, H.N. & Wiwe, B.D. 2010. Towards the effective tool wear control in micro-EDM milling. Int. J. Adv. Manuf. Technol., 47: 3–9.
Cyril Pilligrin, J., Asokan, P. Jerald, J. Kanagaraj G., J.M. Nilakantan & Nielsen, I. 2017. Tool speed and
polarity effects in micro EDM drilling of 316 L stainless steel. Production & Manufacturing Research,
5(1): 99–117.
Equbal, A. & Sood, A.K. 2014. Electric Discharge Machining; An overview on various areas of
research. Journal of Manufacturing and Industrial Engineering, 13: 1–6.
Jingyu, P., Zhang, L., Du, J., Zhuang, X., Zhou, Z., W. Shunkun & Zhu, Y. 2017. A model of tool wear
in electrical discharge machining process based on electromagnetic theory. International Journal of
Machine Tools & Manufacture, 117: 31–41.
Lee, S.H. & Li, X.P. 2001. Study of the effect of machining parameters on the machining characteristics
in electrical discharge machining of tungsten carbide, Journal of Materials Processing Technology,
115: 344–358.
Liu, Y., Zhang, W., Zhang, S. & Sha, Z. 2014. The simulation research of tool wear in small hole EDM machining on titanium alloy. Applied Mechanics and Materials, 624: 249–254.
Yeakub Ali, M. & Mohammad, A.S. 2009. Effect of Conventional EDM Parameters on the Micro
machined Surface Roughness and Fabrication of a Hot Embossing Master Micro tool, Materials and
Manufacturing Processes, 24: 454–458.
Yuangang, W. Fuling, Z. & Jin, W. 2009. Wear-resist Electrodes for Micro-EDM, Chinese Journal of
Aeronautics, 22: 339–342.
ABSTRACT: In this study, 3 mm thick plates of AISI 316L austenitic stainless steel and
High Strength Low Alloy (HSLA) steel were dissimilar GTA welded and the tensile strength
and microstructural properties were investigated. The experimental trials were carried out
as per the Taguchi design and the welding current, welding speed, wire feed rate and filler
material were selected as the parameters. The results showed that the highest joint strength of
610 MPa was obtained at welding current of 100 A, welding speed of 9 cm/min and wire feed
rate of 1.6 m/min with 304 steel as the filler material. ANOVA revealed that the welding cur-
rent and wire feed rate are the most significant and least significant parameters, respectively.
The regression model showed that the welding speed and filler material and the welding cur-
rent and wire feed rate have interaction effect on the tensile strength of the joints.
1 INTRODUCTION
Austenitic stainless steel, AISI316L is known for its inherent superior properties like high
strength at elevated temperatures, increased resistance to pitting and general corrosion, high
creep strength etc. High strength low alloy (HSLA) steels possess very high strength, high
strength to weight ratio and increased corrosion resistance together with relatively low cost.
316L austenitic stainless steel and HSLA steels have a wide range of combined applications in the marine, automotive and locomotive sectors. In spite of having excellent qualities individually, dissimilar fusion welded 316L/HSLA steel joints (by conventional techniques) are
prone to complications such as grain coarsening, sensitization, stress corrosion cracking, hot
cracking etc. Thus, accurate control of welding process parameters is essential to regulate the
heat input and hence to reduce the associated problems.
With regard to fusion welding of austenitic stainless steels, Anand Rao & Deivanathan (2014) conducted a detailed experimental study on TIG welding of 310 steel joints. In their research, a total of nine welded joints were fabricated and tested with the objectives of analysis and optimization of the TIG welding process. The results showed that a welding current of 120 A with 309L steel filler metal produced the highest joint tensile strength of 517.9 MPa, while a welding current of 80 A with 316L filler metal produced the lowest tensile strength of 454.6 MPa. Navid Moslemi et al. (2015) conducted a study on the effect of welding current on the mechanical and microstructural characteristics of TIG welded 316 steel joints. The
mechanical characteristics of the welded joints such as tensile strength and microhardness
were evaluated in the study. Microstructural studies confirmed the presence of secondary
sigma phase that caused embrittlement in the weld zone.
Bharath et al. (2014) reported the process optimization and joint analysis of 316 steel TIG welded joints using the Taguchi technique. With regard to dissimilar fusion welding of
stainless steels and HSLA steels, only a very few works are reported in the literature. Anant
et al. (2017) have developed a special nozzle for the GMAW and successfully welded 25 mm
thick plates of dissimilar 304L stainless steel and SA543 HSLA steel. The authors have used
308 L steel as filler metal and the weld was completed by multiple passes.
The base materials selected for this research were 3 mm thick rolled sheets of HSLA steel,
IRS-M42-93 and austenitic stainless steel, AISI 316L. The composition of HSLA steel
and 316L steel is given in Table 1. The optical micrographs of the base metal are shown in
Figure 1. The base 316L steel consists mostly of coarse austenite with a minor amount of ferrite. Annealed twins crossing the grain boundary interface were observed. The base HSLA steel has an approximately equiaxed, fine-grained ferritic-pearlitic microstructure. Work
pieces of 150 mm × 50 mm × 3 mm size were cut by cold shearing. Three types of filler wire were used in this investigation: AISI 304 steel, 309L steel and 316L steel. The composition of each filler material is shown in Table 3. The welding trials were performed using automated TIG welding equipment and the joints were finished in a single welding pass.
The Taguchi L9 orthogonal design was selected as the design of experiments (DOE) technique (Krishnaiah & Shahabudeen, 2012). The most important stage in the DOE is the selection of the technique and of the control factors. The travel speed (welding speed), welding current, wire feed rate and filler material were selected as the factors (variables) of the design. The range and the upper and lower bounds of the selected variables were fixed by trial and error, with the criterion of visually defect-free joints. The four-factor, three-level L9 design matrix given in Table 2 was developed using the statistical software Minitab 17. Two sets of joints were fabricated at each factor setting and the average of the response was considered for modeling and analysis.
Table 1. Composition of the base materials.
Composition (wt%)
Material C Mn Si P S Al Cu Cr Mo Ni Nb Ti V N Fe
HSLA Steel; IRSM-42-97 0.11 0.42 0.32 0.10 0.01 0.029 0.31 0.54 0.001 0.22 0.001 0.002 0.002 – Bal
AISI 316L steel 0.03 2.0 0.75 0.045 0.03 – – 17 2–3 10–14 – – – 0.1 Bal
Figure 1. Optical micrographs of base materials; (a) 316L steel (b) HSLA steel.
Table 2. The L9 design matrix.
Sl. No. Filler material Current (A) Travel speed (cm/min) Feed (m/min)
1 SS 304 80 8 1.2
2 SS 304 90 8.5 1.4
3 SS 304 100 9 1.6
4 SS 316L 80 8.5 1.6
5 SS 316L 90 9 1.2
6 SS 316L 100 8 1.4
7 SS 309L 80 9 1.4
8 SS 309L 90 8 1.6
9 SS 309L 100 8.5 1.2
Table 3. Composition (wt%) of the filler materials (Ni, Cr, Si, Mn, C, P, S, Mo, N, Cu, Fe).
The specimens for the tensile test were cut by conventional milling process with the geom-
etry as per ASME SEC IX (2015): Boiler and Pressure Vessel Code (QW-462 Test specimen).
The tensile test was done on a universal testing machine with 50 kN capacity at crosshead
speed of 4 mm/min. The specimens for optical microscopy were prepared using standard metallographic procedures. The etched specimens were observed under an optical microscope (model BX51).
Visual examination of the welded joints showed that in seven of the nine welding conditions, no visible surface defects were observed. For the sixth and seventh runs, however, the joints exhibited visual defects, namely excessive deposition and lack of fusion on the HSLA side, respectively. The observed surface defects may be caused by excessive or insufficient heat generation as a result of the combined effect of welding current, welding speed and filler metal feed rate.
The average tensile strength of the joints is shown in Table 4. The highest tensile strength of 610 MPa and the lowest strength of 415 MPa were obtained for the third and fourth trial runs, respectively. The third trial, which produced the highest joint strength, corresponds to a welding current of 100 A, a welding speed of 9 cm/min and a feed rate of 1.6 m/min, whereas the joint with the lowest strength corresponds to a welding current of 80 A, a welding speed of 8.5 cm/min and a wire feed rate of 1.6 m/min. For the best joint (highest strength), the filler material used was 304 steel, whereas for the joint with the lowest strength the filler material was 316L steel.
Figures 2 and 3 portray typical optical micrographs of the weld nugget region of the joints
that possess the lowest and highest joint strength, respectively. Figure 2 reveals the formation
of coarse grain in the nugget zone. Also, it seems that the grain boundaries are characterized
by the precipitation of secondary phase. The precipitation may be sigma phase as a result of
the very high heat generation and the use of 316L filler metal during welding. Under high
heat input, there will be increased distribution of sigma phase in nugget zone with a wider
Table 4. Coded and actual values of the variables (M: filler material grade; I: welding current, A; U: travel speed, cm/min; F: wire feed rate, m/min), with the measured tensile strength TS (MPa) and S/N ratio for each trial.
Figure 2. Optical micrographs of the weld nugget of the lowest strength joint.
Figure 3. Optical micrographs of the weld nugget of the highest strength joint.
heat affected zone (HAZ). Under tensile load, cracks can easily propagate through the grain
boundaries causing failure at low loads. Thus the observed microstructure clearly substanti-
ates the lowest joint strength resulted for the joint.
Referring to Figure 3, the weld nugget appears to have a low concentration of secondary phase formations. The marginally lower heat generation and the use of 304 steel filler wire are the probable causes of the low concentration of secondary phases. Vitek & David (1984) reported that the sigma phase reaction is accelerated by large-scale deformation.
The response, the tensile strength (TS) of the joint, is modeled as a function of the four factors selected for the DOE. The generalized form of the regression model is given by

TS = f(I, U, F, M) (1)

where I is the welding current, U is the welding speed, F is the wire feed rate and M is the filler material. The coded and actual values of the variables and the values of TS are given in Table 4. The regression model developed using the statistical software Minitab 17 is given in equation (2).
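Equation (2) itself is not reproduced here; the sketch below shows how such a regression with interaction terms can be fitted to the L9 results (the tensile strength values other than the reported 610 MPa and 415 MPa are placeholders).

# Fit TS against the main effects plus the I*F and U*M interactions.
import numpy as np

# Coded levels (-1/0/+1) for the nine trials of Table 2; columns: I, U, F, M.
X = np.array([[-1, -1, -1, -1], [0, 0, 0, -1], [1, 1, 1, -1],
              [-1, 0, 1, 0], [0, 1, -1, 0], [1, -1, 0, 0],
              [-1, 1, 0, 1], [0, -1, 1, 1], [1, 0, -1, 1]], float)
ts = np.array([520.0, 540.0, 610.0, 415.0, 505.0, 530.0, 470.0, 495.0, 525.0])

A = np.column_stack([np.ones(9), X, X[:, 0] * X[:, 2], X[:, 1] * X[:, 3]])
coef, *_ = np.linalg.lstsq(A, ts, rcond=None)
print("fitted coefficients (intercept, I, U, F, M, I*F, U*M):", np.round(coef, 1))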
Figure 5. Contour plot of Tensile Strength (TS) vs current & feed rate and TS vs filler metal &
welding speed.
4 CONCLUSION
In the present research, dissimilar AISI316L stainless steel and HSLA steel were successfully
TIG welded and the joint tensile strength and microstructure were investigated. The follow-
ing conclusions were made.
– The highest joint strength of 610 MPa is obtained for the joint fabricated at welding cur-
rent of 100 A, welding speed of 9 cm/min and wire feed rate of 1.6 m/min with 304 steel as
filler metal.
– The micrographs revealed signs of secondary phase evolution at the grain boundaries in
the weld nugget region.
– ANOVA of the results showed that the welding current and wire feed rate are the most significant and least significant parameters, respectively.
– Analysis of the developed model suggests that the welding current and wire feed rate have a significant interaction effect on the joint strength, and that the welding speed and filler metal have a moderate interaction effect on the joint strength.
REFERENCES
Anand Rao, V. & Deivanathan, R. 2014. Experimental Investigation for welding aspects of stainless
steel 310 for the Process of TIG welding. Procedia Engineering, 97, 902–908.
Bharath, P. Sridhar, V.G. & Senthil Kumar, M. 2014. Optimization of 316 stainless steel weld joint char-
acteristics using Taguchi technique, Procedia Engineering, 97, 881–891.
Krishnaiah, K. & Shahabudeen, P. 2012. Applied Design of Experiments and Taguchi Methods.
Nabendu Ghosh, Pradip Kumar Pal. & Goutam Nandi 2016. Parametric optimization of MIG welding
on 316L austenitic stainless steel by Grey-based Taguchi method, Procedia Technology 25, 1038–1048.
Navid Moslemi, Norizah Redzuan, Norhayati Ahmad & Tang n Hor. (2015) Effect of current on char-
acteristic for 316 stainless steel welded joint including microstructure and mechanical properties,
Procedia CIRP. 26. 560–564.
Ramachandran, K.K. Murugan, N. & Shashi Kumar, S. 2015. Influence of tool traverse speed on the
characteristics of dissimilar friction stir welded aluminium alloy, AA5052 and HSLA steel joints.
Archives of civil and mechanical Engineering, 15, 822–830.
Ramkishor Anant & Ghosh, P.K. 2017. Ultra-narrow gap welding of thick section of austenitic stain-
less steel to HSLA steel, Journal of Materials Processing Technology, 239, 210–221.
Vitek, J.M. & David, S.A. 1984. The sigma phase transformation in austenitic stainless steels, 65th
Annual AWS Convention in Dallas, Tex.
ABSTRACT: Fracture mechanics is a very important tool used for improving the life
cycle of mechanical components. During manufacturing, flaws or cracks will be formed in
all metal structures. Studying and monitoring the propagation of the crack in a component
forms the core of fracture study. In the present work, a parametric study on the propaga-
tion of semi elliptical crack in a turbine blade is carried out. A turbine blade with its hub is
modeled using CATIA software. The model is then imported into ANSYS Workbench and
a Finite Element analysis is performed. Rotational velocity is applied on cracks at different
orientations ranging from 0° to 90°, at different crack depth to half crack length ratios, and the stress intensity factor (K) is determined for two cases: with thermal load, and without thermal load (i.e., the static load case). The finite element results are validated against the Raju-Newman empirical solution using MATLAB software. The results will be useful in the assessment of the structural integrity of the component.
1 INTRODUCTION
Fracture mechanics analysis forms the basis of damage tolerant design methodology. Its
objectives are the determination of stress intensity factor (K), energy release rate (G), path
independent integral (J), Crack Tip Opening Displacement (CTOD) and prediction of mixed
mode fracture, residual strength and crack growth life.
Solution for Stress Intensity Factor (SIF) in mode I for a surface crack in a plate is pre-
sented in empirical form by Newman & Raju (1981). This empirical equation is presented as
a function of parametric angle, depth and length of the crack as well as thickness and width
of the plate for tension and bending loads. Witek (2011) discusses the failure of a compressor blade due to bending fatigue loads, and the calculation of SIF is performed using
the Raju-Newman solution for a semi-elliptical crack. Barlow & Chandra (2005) discuss
the fatigue crack growth rates at the fan blade attachment in an aircraft engine due to cen-
trifugal and aerodynamic loads. Song et al. (2007) deliberate on the failure of a jet-engine
turbine blade due to improper manufacturing techniques. One of the observations that can
be made from the above-mentioned discussions is that the cracks may originate in any form
and direction due to improper design or manufacturing methods. Hence it becomes impera-
tive to perform analysis to obtain SIF values of cracks at various orientations, for growth
analysis.
In this work, fracture analysis is performed on the turbine blade of third stage turbine
bucket of a gas turbine. Initially, static analysis is carried out to obtain the region of crack
nucleation, and then semi-elliptical cracks at various orientations with respect to the rotor
axis is analyzed using the finite element technique to obtain the values of stress intensity fac-
tors. A parametric study varying the crack parameters is conducted to analyse their effects on
the stress intensity factor in three modes.
A turbine blade constitutes a part in the turbine region of a gas turbine engine, and functions
as the component responsible for absorbing energy from the gas at high temperature and
pressure created in the combustor of the engine. The turbine blades very often are the limit-
ing components of gas turbines since they are subjected to extreme thermal and fluid stresses.
Due to this reason, turbine blades are often made out of materials like alloys of Titanium
containing exotic additives. They also use different and ingenious techniques of cooling, such
as air channels inside the blade itself, boundary layer cooling, and thermal barrier coatings.
Density 4540 kg/m3
Young’s modulus 120 GPa
Shear modulus 45.5 GPa
Poisson’s ratio 0.32
Ultimate tensile strength 1010 MPa
Yield strength 990 MPa
Fracture toughness 148 MPa√m
Coefficient of thermal expansion 8.1 × 10-6/°C
Operating temperatures: T1 = 500°C, T2 = 550°C, T3 = 601°C
K_I = (S_t + H·S_b) √(πa/Q) · F(a/t, a/c, c/b, φ) (1)

The term H·S_b can be ignored because only tensile loading is considered. Hence, the equation for the stress intensity factor becomes

K_I = S_t √(πa/Q) · F(a/t, a/c, c/b, φ) (3)
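A minimal sketch of the tension solution of equation (3) in its commonly cited form (valid for a/c ≤ 1) is given below; the plate thickness, half-width and applied stress in the usage line are assumed values, not taken from the paper.

# Newman-Raju mode I SIF for a semi-elliptical surface crack under tension.
import numpy as np

def newman_raju_kI(St, a, c, t, b, phi):
    ac, at = a / c, a / t
    Q = 1.0 + 1.464 * ac**1.65  # crack-shape factor
    M1 = 1.13 - 0.09 * ac
    M2 = -0.54 + 0.89 / (0.2 + ac)
    M3 = 0.5 - 1.0 / (0.65 + ac) + 14.0 * (1.0 - ac)**24
    g = 1.0 + (0.1 + 0.35 * at**2) * (1.0 - np.sin(phi))**2
    f_phi = (ac**2 * np.cos(phi)**2 + np.sin(phi)**2)**0.25
    f_w = 1.0 / np.sqrt(np.cos((np.pi * c / (2.0 * b)) * np.sqrt(at)))  # finite-width correction
    F = (M1 + M2 * at**2 + M3 * at**4) * g * f_phi * f_w
    return St * np.sqrt(np.pi * a / Q) * F

# Deepest point (phi = pi/2) of the validation crack: a = 0.80 mm, c = 1.0 mm.
print(newman_raju_kI(St=200e6, a=0.8e-3, c=1.0e-3, t=4e-3, b=25e-3, phi=np.pi / 2))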
The Raju-Newman solution for the stress intensity factor covers tensile and bending stresses. Since only rotational loading is considered in the analysis, the bending part of the solution is ignored. As for the boundary conditions applied to the plate, the bottom face of the plate is treated as a fixed support, and a tensile pressure is applied on the top face to simulate the centrifugal load. This simulates the crack opening mode.
Figure 3 shows a comparison of mode 1 stress intensity factor values obtained from ANSYS
Workbench and the target solution of Raju-Newman empirical equation given by (3).
The present solution is in good agreement with the empirical solution of Newman & Raju (1981). The maximum variation is found to be 2.457%. These values can be considered satisfactory, and hence the finite element model developed is considered validated.
Figure 6. K1 for crack angles 0° to 90° at intervals of 15°.
Figure 7. K2 for crack angles 0° to 90° at intervals of 15°.
The crack mesh has 7 solution contours for each a/c ratio. The Post-Processing of the solu-
tion involves exporting the values to Microsoft Excel for further analysis. For the parametric
studies, the average of the values obtained from all seven solution contours is considered.
Seven different analyses of the blade were carried out with the crack at 0°, 15°, 30°, 45°, 60°, 75° and 90° orientations with respect to the longitudinal axis of the turbine rotor. The crack length and depth in all seven cases were kept constant at 2.0 mm and 0.80 mm, respectively. The plots of SIF along the crack front for crack angles 0° to 90° at intervals of 15° are shown in Figures 6, 7 and 8.
The behavior of the values of K1 is shown in Figure 6. It is observed that the values of K1 are highest for the 0° crack, which indicates that cracks at 0° to the load have the highest likelihood of propagation compared to the others. The SIF values decrease with increasing crack orientation and are minimum for the 90° crack, which has the lowest likelihood of propagation. It is observed in Figure 7 that the values of SIF in the sliding mode vary from a positive number to a negative number along the crack front for all angles except 90°, where the behavior is reversed. The values of K3 are plotted in Figure 8. For cracks of all orientations, the stress intensity factor values are observed
4 PARAMETRIC STUDIES
The parametric studies of this project provide insight into the stress intensity factor behavior at the crack front for various thermal loads and a static load condition. Semi-elliptical crack length, crack depth and crack orientation are the parameters considered for the analysis. In the current work, the variation of the stress intensity factor in three modes for cracks at different orientations, with different a/c ratios and different loading conditions, is studied.
Thermal-static loads: For the thermal-static load case, a titanium alloy with yield strength 990 MPa, fracture toughness 148 MPa√m and coefficient of thermal expansion 8.1 × 10⁻⁶/°C, at temperature T3 = 601°C, is considered.
Mode-1: The behavior of the mode-1 (opening mode) stress intensity factor under thermal and static load conditions is very similar for cracks of all orientations. From the graphs (Figures 12–17), the values of K1 for cracks of different a/c ratios decrease proportionally with increasing crack orientation. There is a notable variation in this proportional decrease, i.e. less reduction at one end of the crack front in comparison with the other end, where the variation is large. This can be attributed to the curvature and twist along the span of the blade, leading to K1 values which result in crack closure due to compression. The maximum value of K1 for the thermal load case is nearly five times lower than for the static load case. The value of K1 becomes maximum at a larger a/c ratio in the static load case, whereas in the thermal load case the value of K1 increases from crack angle 0° to 30° and then decreases for higher crack angles. This shows that cracks with a larger length compared to their depth have a higher tendency of propagation near the crack tip, while cracks which are comparatively smaller in length have a higher tendency of propagation near the centre of the crack front, leading to component failure.
Figures 9 to 13 show that the distribution of K1 shifts continuously from lying along the crack length at 0° crack angle to a diagonal pattern as the crack angle increases. The extreme values of K1 are positive in the static load case, whereas in the thermal load case the value of K1 changes from a positive to a negative value between the two crack fronts, except for the 0° crack angle. Two important parameters, semi-elliptical crack length and depth, are considered for the analysis, and the variation in the stress intensity factors in three modes for cracks at different orientations and different a/c ratios is studied in the present work.
Figure 9. K1 values for 0° crack angle.
Figure 10. K1 values for 30° crack angle.
Figure 11. K1 values for 60° crack angle.
Figure 12. K1 values for 75° crack angle.
Figure 16. K1 values for 60° crack.
Figure 17. K1 values for 75° crack.
Figures 9 to 13 also show that cracks with a larger length compared to their depth have a higher tendency of propagation, leading to component failure.
The behavior of K1 values for cracks of orientations 60° and 75° is observed to be differ-
ent from the rest. The K1 values, from Figures 9–13, are negative for these two cracks. This
can be attributed to the curvature and twist along the span of the blade, leading to K1 values
which result in crack closure due to compression.
As seen from Table 2, for both the static and thermal load cases, the extreme values of K1 for all crack orientations are located near the crack tip, except for the 0° crack angle, where the maximum value of K1 is located at the centre of the crack length. For the static load case, (K1)max = 30 MPa√m at a/c ratio 0.80 and 0° crack orientation; similarly, (K1)min = 3.6 MPa√m at a/c ratio 0.40 and 90° crack orientation. The extreme values of K1 decrease proportionally with increasing crack orientation.
For the thermal load case, (K1)max = 6.6 MPa√m at a/c ratio 0.80 and 30° crack orientation; similarly, (K1)min = −3.2 MPa√m at a/c ratio 0.40 and 30° crack orientation. The maximum value of K1 initially increases from crack angle 0° to 30° and decreases with further increase of the crack angle, while the minimum value of K1 initially decreases from crack angle 0° to 30° and increases with further increase of the crack angle.
As seen from Table 3, for both the static and thermal load cases, the extreme values of K1 for all crack orientations are located near the crack tip, except for the 0° crack angle, where the maximum value of K1 is located at the centre of the crack length. For the static load case, (K1)max = 30 MPa√m at a/c ratio 0.80 and 0° crack orientation. Similarly,
5 CONCLUSIONS
Thermal-static loads
The maximum value of K1 for the thermal load case is nearly five times lower than for the static load case. From the FEA results for the static load case, it can be concluded that fracture by mode-1 is likely to occur, because the maximum value of K1 is found at 0° crack orientation. The maximum values of K1 and K2 for the thermal load case are found at crack angles 30° and 0°, respectively.
As the temperature increases from T1 = 500°C to T2 = 550°C, the maximum value of K1 decreases by 14.6%; with a further increase from T2 = 550°C to T3 = 601°C, it decreases by 19.5%. From the FEA results for the thermal load case, it is observed that with increasing temperature the values of K1 and K3 decrease proportionally along the crack length, whereas the value of K2 increases with increasing temperature.
ACKNOWLEDGMENT
We would like to express our sincere gratitude to TEQIP III and the Management of B.M.S. College of Engineering, Bengaluru, for extending financial support for publishing and presenting this paper.
REFERENCES
ABSTRACT: Three-phase fluidized beds have much significance since they offer
excellent heat and mass transfer rates. Hence they are utilized in major industries such as
biotechnology, pharmaceuticals, food, chemicals, and environmental and refining plants.
Computational Fluid Dynamics (CFD) is an economical method by which to study the
hydrodynamic properties of three-phase fluidized beds, because experimental and theoretical
methods have their own limitations. A study of the hydrodynamics of a gas-liquid-solid
(three-phase) fluidized bed has been made using ANSYS Fluent simulation software. The
simulation has been carried out on a cylindrical column 1.8 m tall and 0.1 m diameter. Glass
particles of diameter 2.18 mm and 3.05 mm were used for initial bed heights of 0.267 m and
0.367 m. The hydrodynamic properties, such as bed expansion, holdup of all three phases,
and pressure drop across the column, were studied by varying inlet water velocity and inlet
air velocity. Finally, comparison was made between the results obtained from the simulation
and the experimental results. The CFD simulation result shows excellent agreement with the
experimental results.
1 INTRODUCTION
Zhang and Ahmadi (2005) developed a 2-D model for a three-phase slurry reactor using the Eulerian–Lagrangian approach. They investigated the transient flow characteristics of all the phases.
Sivalingam and Kannadasan (2009) carried out an experiment on a three-phase fluidized
bed. They made an effort to study the relationship between fluid flow rates and hydrodynamic
characteristics. The conclusion drawn from the experiment was that the gas flow rate influ-
ences the design of a fluidized bed. Sau and Biswal (2011) conducted an experimental study
and a 2-dimensional CFD study. They made an attempt to study and compare the hydrody-
namic properties of a two-phase tapered fluidized bed using the Eulerian approach. They
concluded that 3-dimensional study would provide better results than 2-dimensional study.
Mohammed et al. (2014) presented an experimental study on a three-phase fluidized bed by
considering water, kerosene and spherical plastic particles as different phases. From the experi-
ment, they found that the holdup of the dispersed phase increases with its velocity as well as
particle size. However, it decreases with continuous phase velocity. Jena et al. (2008) presented
an experimental study on a three-phase fluidized bed. They considered glass beads of differ-
ent diameters as the solid phase. Liquid was taken as continuous phase. They found that the
holdup of gas increased with gas velocity and it increased when particle size was increased.
From the brief review of the literature above, it is clear that several experimental studies have been done on three-phase fluidized beds, but only a few works have studied three-phase fluidized beds using CFD. For better comparison with experimental results, 3-dimensional simulation is generally preferred over 2-dimensional simulation, because it gives more realistic results. Studying the three-phase fluidized bed using CFD is found to be a promising approach, with reduced cost and effort compared to experiments.
In this work, the important hydrodynamic properties of a three-phase fluidized bed have
been studied using CFD. For this, ANSYS Fluent simulation software is used to model and
solve the problem using a Eulerian approach. The simulation results are validated by com-
parison with experimental results.
A cylindrical column of 1.8 m height and 0.1 m diameter has been considered for the study.
Glass, water and air are taken as solid, liquid and gas phases, respectively. Water is treated as
a continuous phase. Secondary phases are glass particles and air. A uniform velocity inlet and
pressure outlet boundary condition with mixture gauge pressure 0 Pa has been used. At the
wall, no slip condition has been used for water. X = 0, Y = 0 specified shear was used for air
and glass. The parameters used in the simulation are tabulated in Table 1.
2.4 Equations
2.4.1 Continuity equation
$$\frac{\partial}{\partial t}\left(\varepsilon_k \rho_k\right) + \nabla\cdot\left(\varepsilon_k \rho_k \vec{u}_k\right) = 0 \quad (1)$$
where
$$\varepsilon_g + \varepsilon_l + \varepsilon_s = 1 \quad (2)$$
The momentum equation for the liquid phase is
$$\frac{\partial}{\partial t}\left(\rho_l \varepsilon_l \vec{u}_l\right) + \nabla\cdot\left(\rho_l \varepsilon_l \vec{u}_l \vec{u}_l\right) = -\varepsilon_l \nabla P + \nabla\cdot\left(\varepsilon_l \mu_{eff,l}\left(\nabla\vec{u}_l + (\nabla\vec{u}_l)^T\right)\right) + \rho_l \varepsilon_l \vec{g} + \vec{M}_{i,l} \quad (3)$$
and for the gas phase
$$\frac{\partial}{\partial t}\left(\rho_g \varepsilon_g \vec{u}_g\right) + \nabla\cdot\left(\rho_g \varepsilon_g \vec{u}_g \vec{u}_g\right) = -\varepsilon_g \nabla P + \nabla\cdot\left(\varepsilon_g \mu_{eff,g}\left(\nabla\vec{u}_g + (\nabla\vec{u}_g)^T\right)\right) + \rho_g \varepsilon_g \vec{g} - \vec{M}_{i,g} \quad (4)$$
A grid-independence study has a major role in determining the time required for the simulation. As the number of elements/cells decreases, the simulation time reduces, but a smaller number of elements leads to deviation from the expected results. Hence it is important to find the number of elements which gives accurate results with a minimum number of iterations. Simulations were performed for different numbers of elements; the values of pressure drop obtained are shown in Figure 2.
From Figure 2 it is clear that as the number of cells increases the value of pressure drop
also increases. With 55390 cells, the pressure drop obtained is 1380 Pa, which is close to
the experimental result with the same boundary conditions. When the number of cells was
increased further, the variation in the value of pressure drop is smaller. Hence the mesh with
55390 cells was chosen for the calculation to minimize the simulation time.
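The mesh-selection logic described above can be sketched as follows; only the reported point (55390 cells, 1380 Pa) is from the study, while the other mesh sizes and pressure drops are illustrative placeholders:

```python
# (cells, pressure drop in Pa); only the 55390-cell value is from the study
meshes = [(20000, 1210.0), (35000, 1318.0), (55390, 1380.0), (80000, 1391.0)]

tol = 0.02  # accept the coarser mesh once refinement changes dp by < 2%
for (n0, dp0), (n1, dp1) in zip(meshes, meshes[1:]):
    change = abs(dp1 - dp0) / dp0
    if change < tol:
        print(f"select {n0} cells: refining to {n1} changes dp by only {change:.2%}")
        break
```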
A fluidized bed which uses liquid as primary phase, and gas and solid particles as secondary
phases has been modeled and solved using ANSYS Fluent. Bed heights of 0.267 m and 0.367 m
were used to study the fluidized bed with 2.18 mm and 3.05 mm glass beads. The fluidized bed
has been studied by varying inlet superficial velocity of water and inlet air velocity.
During the simulation of the fluidized bed, variation in the profile of the bed is observed, as shown in Figure 3, but no major change in the profile is seen after some time. This indicates that the bed is in a fluidized condition.
Figure 2. Values of pressure drop for different numbers of cells (Vl = 0.04 m/s, Vg = 0.02 m/s).
It is clear from the contour that the bed has reached a fluidized state. The water volume
fraction is relatively smaller in the fluidized region and that of air is greater when compared
to the remaining part.
A comparison of the experimental and simulation results for air holdup is made in Figure 12 for glass particles of 3.05 mm at 0.267 m bed height and an inlet air velocity of 0.02 m/s. The match between the simulation and experimental results is excellent.
With increasing inlet air velocity, the pressure drop does not increase much; in fact, the pressure drop varies inversely with the inlet air velocity. This is because as the inlet air velocity increases, gas holdup increases and water holdup in the column decreases. Because the density of air is much lower than that of water, as the water holdup decreases the pressure drop also decreases.
Figure 14 shows the behavior of pressure drop across the column with inlet water for
3.05 mm glass particles at 0.267 m and 0.367 m bed heights. The pressure drop increases when
the initial bed height increases.
5 CONCLUSIONS
A study of a gas-liquid-solid three-phase fluidized bed has been made using a Eulerian–Eul-
erian approach. A 3-dimensional model having 1.8 m height and 0.1 m diameter was devel-
oped. The hydrodynamic properties, such as bed expansion, air holdup, water holdup, and
pressure drop, were studied by varying inlet water velocity, inlet air velocity, particle diameter
and bed height. The major conclusions from the study are summarized as follows:
• Bed expansion is directly proportional to water velocity. It is not much affected by inlet air velocity. It increases when static bed height is increased and reduces when particle size is increased.
• Gas holdup is directly proportional to inlet air velocity. It decreases when inlet water velocity is increased.
• Glass holdup decreases when inlet water velocity is increased, and only slight variation is observed in glass holdup when inlet air velocity is increased.
• Water holdup varies directly with inlet water velocity and decreases with an increase in air velocity.
• Pressure drop varies directly with inlet water velocity and inversely with inlet air velocity. Pressure drop increases when initial bed height and particle diameter are increased.
• The simulation results show excellent agreement with the experimental results.
REFERENCES
ANSYS. (2003). Fluent 6.1 User’s Guide (pp. 1–5). Canonsburg, PA: ANSYS.
Blazek, J. (2001). Computational fluid dynamics: Principles and applications (1st ed.). Oxford, UK:
Elsevier.
Jena, H.M., Sahoo, B.K., Roy, G.K. & Meikap, B.C. (2008). Characterization of hydrodynamic proper-
ties of a gas-liquid-solid three-phase fluidized bed with regular shape spherical glass bead particles.
Chemical Engineering Journal, 145, 50–56.
Li, Y., Zhang, J. & Fan, L.S. (1999). Numerical simulation of gas-liquid-solid fluidization system using
a combined CFD-VOF-DPM method: Bubble wake behaviour. Chemical Engineering Science, 54,
5101–5107.
Mohammed, T.J., Sulaymon, A.H. & Abdul-Rahmun, A.A. (2014). Hydrodynamic characteristic
of three phase (liquid-liquid-solid) fluidized beds. Journal of Chemical Engineering and Process
Technology, 5, 188.
Saha, S.N., Dewangan, G.P. & Gadhewal, R. (2016). Gas-liquid-solid fluidized bed simulation. International Journal of Advanced Research in Chemical Science, 3, 1–8.
Sau, D.C. & Biswal, K.C. (2011). Computational fluid dynamics and experimental study of the
hydrodynamics of a gas–solid tapered fluidized bed. Applied Mathematical Modelling, 35, 2265–2278.
Sivalingam, A. & Kannadasan, T. (2009). Effect of fluid flow rates on hydrodynamic characteristics
of co-current three phase fluidized beds with spherical glass bead particles. International Journal of
ChemTech Research, 1, 851–855.
Witt, P.J., Perry, J.H. & Schwarz, M.P. (1998). A numerical model for predicting bubble formation in a
3D fluidized bed. Applied Mathematical Modelling, 22, 1071–1080.
Zhang, X. & Ahmadi, G. (2005). Eulerian–Lagrangian simulations of liquid-gas-solid flows in three-
phase slurry reactors. Chemical Engineering Science, 60, 5089–5104.
S. Ravikumar
GKM College of Engineering and Technology, Tamilnadu, India
S. Kanagasabapathy
National Engineering College, Kovilpatti, Tamilnadu, India
V. Muralidharan
B.S. Abdur Rahman University, Chennai, Tamilnadu, India
ABSTRACT: The belt conveyor system is used for conveying large volumes of materials from one location to another. The Self-Aligning Troughing Roller (SATR) is one of the critical components in the belt conveyor and is quite influential in keeping the belt conveyor running in fault-free condition. The SATR has to operate under heavy axial and shear forces, which lead to frequent failures; hence continuous monitoring and fault diagnosis of the SATR become essential. The self-aligning troughing idler arrangement has a long roll to support the belt and handle maximum loads per cross-section. The self-aligning troughing roller has machine elements including the ball bearing, a central shaft and the external shell. In the belt conveyor system certain faults, such as Bearing Flaws (BF), Central Shaft Faults (CSF), and combined Bearing Flaws and Central Shaft Faults (BF & CSF), occur frequently. A prototype investigational model was made with the above-mentioned faults and the vibration signals were acquired from the set-up. The acquired vibration data were fed as input to Artificial Neural Network (ANN) and Naive Bayes (NB) algorithms, which were used for classification of the acquired signals. In the present effort, the artificial neural network and Naive Bayes algorithms were found to achieve 82.1% and 90% classification accuracy, respectively, which indicates that the Naive Bayes algorithm has an advantage over artificial neural networks in fault diagnosis applications.
1 INTRODUCTION
The self-aligning troughing roller (SATR) is an essential element of the belt conveyor system. It may fail due to multidimensional forces, inadequate lubrication, faulty sealing, uneven loading and improper training of the belt. The critical elements that fail periodically in the self-aligning troughing roller are the groove ball bearing and the central shaft.
The malfunction of these parts directly affects the efficiency of the SATR, which can hinder the proper functioning of the belt conveyor system. In these circumstances, to avoid overwhelming damage to the belt conveyor, a failure prediction system is a major requirement. The various conditions considered in this research are the SATR running in Fault Free Condition (FFC), Bearing Fault Condition (BFC), Central Shaft Fault (CSF), and combined Bearing Fault and Central Shaft Fault (BFC & CSF). The malfunction of these components affects the functioning of the SATR, which in turn leads to under-performance of the belt conveyor system.
2 RELATED WORK
Murru (2016) presented an original algorithm for initialization of weights in back propa-
gation neural net with application to character recognition. The initialization method was
mainly based on a customization of the Kalman filter, translating it into Bayesian statistics
terms. A metrological approach was used in this context considering weights as measure-
ments, modeled by mutually dependent normal random variables. The algorithm perform-
ance was demonstrated by reporting and discussing results of simulation trials. Results were
compared with random weights initialization and other methods. The proposed method
showed an improved convergence rate for the back propagation training algorithm.
Wong et al. (2016) proposed a Probabilistic Committee Machine (PCM), which combines
feature extraction, a parameter optimization algorithm and multiple Sparse Bayesian Extreme
Learning Machines (SBELM) to form an intelligent diagnostic framework. Results showed
that the proposed framework was superior to existing single probabilistic classifiers. Zhang et al. (2016) developed a Bayesian statistical approach for modal identification using the free vibration response of structures. The results indicated that a frequency-domain Bayesian framework was created for identification of the Most Probable Values (MPVs) of the modal parameters. Mori and Mahalec (2016) introduced a decision-tree-structured conditional probability representation that can efficiently handle a large domain of discrete and continuous variables. Experimental results indicated that their method was able to handle large-domain discrete variables without increasing computational cost exponentially.
Hu et al. (2016) developed the framework of Non-Negative Sparse Bayesian Learning
(NNSBL). The algorithm obviated pre-setting any hyper parameter, where the Expectation
Maximization (EM) algorithm was exploited for solving this NNSBL problem. Without a
prior knowledge of the source number, the proposed method yielded performances in the
underdetermined condition illustrated by numerical simulations.
Kiaee et al. (2016) utilized the concept of random effects in the Extreme Learning Machine
(ELM) framework to model inter-cluster heterogeneity, provided the inherent correlation
among the samples of a particular cluster is taken into account, as well. The proposed ran-
dom effect model includes additional variance components to accommodate correlated data.
Inference techniques based on the Bayesian evidence procedure were derived for the estima-
tion of model weights, random effect and residual variance parameters as well as for hyper
parameters. The proposed model was applied to both synthetic and real-world clustered datasets. Experimental results showed that their method can achieve better performance in terms of accuracy and model size, compared with previous ELM-based models.
Wang et al. (2016) used the Gaussian kernel function with smoothing parameter to
estimate the density of attributes. A Bayesian network classifier with continuous attributes
was established by the dependency extension of Naive Bayes classifiers. The information each attribute provides to a class, as the basis for the dependency extension, is analyzed. Experimental studies on UCI datasets showed that Bayesian network classifiers using the Gaussian kernel function provided good classification accuracy compared to other approaches when dealing with continuous attributes. Magnant et al. (2016) proposed
3 EXPERIMENTAL SET-UP
SATR fault diagnosis involves several steps regarding the conveyor set-up: (i) design and
fabrication with multiple fault conditions, (ii) acquisition of signals, (iii) feature extraction
and (iv) feature classification. The procedure of the process can be clearly understood from
Figure 1. Initially, the belt conveyor model is allowed to run with parts working in fault-free condition and the signals are acquired. One set of shaft and bearing is prefabricated with faults. As given in Table 1, outer rings of thickness 4.5 mm and 4.51 mm were ground down to 4 mm and 3.90 mm, respectively, to develop faults in the groove bearings. The bearing was attached to the self-aligning troughing roller set-up in the conveyor system. Similarly, the central shaft was ground to create a shaft fault, as shown in Table 2.
Table 2. Diameter of the shaft (mm) before and after grinding, by side.
The roller bearing (Model No. KG6200Z) and the central shaft were prefabricated with faults to acquire the vibration readings given in Tables 1 and 2. The different fault conditions, i.e. the SATR having a Bearing Fault (BF), a Central Shaft Fault (CSF), and a combined Central Shaft and Bearing Fault (CSF & BF), were subsequently set up one by one and the corresponding vibration signals were acquired. Figure 2 shows a schematic arrangement of the SATR vibration analysis experimental set-up. A piezoelectric accelerometer sensor (Model No. 3055B1) was mounted over the vibration zone to capture the vibrations generated by the faults.
A signal conditioning unit was connected to the accelerometer sensor, which in turn was connected to an Analog-to-Digital Converter (ADC). The digital vibration signals acquired from the ADC were fed to the computer for further processing through relevant tools. LabVIEW software was used to record the vibration signals in digital form and store them on the computer hard disk. The signals were further processed and different features were extracted using an add-in in Microsoft Excel.
4 FEATURE EXTRACTION
Different statistical parameters are used to characterize the fault conditions in the fabricated model. The statistical parameters include mean, median, mode, standard error, standard deviation, kurtosis, skewness, minimum value, maximum value, sample variance and range; a minimal sketch of their extraction is given below.
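A minimal sketch of extracting these features from one vibration record, using NumPy and SciPy (the mode is omitted here because it is rarely meaningful for continuous-valued signals):

```python
import numpy as np
from scipy import stats

def extract_features(signal):
    """Statistical features of one vibration record (names follow the text)."""
    return {
        "mean": np.mean(signal),
        "median": np.median(signal),
        "standard_error": stats.sem(signal),
        "standard_deviation": np.std(signal, ddof=1),
        "kurtosis": stats.kurtosis(signal),
        "skewness": stats.skew(signal),
        "minimum": np.min(signal),
        "maximum": np.max(signal),
        "sample_variance": np.var(signal, ddof=1),
        "range": np.ptp(signal),
    }
```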
Feature Value
Standard deviation 71
Skewness 63.45
Sample variance 50
Standard error 44.66
Range 38.01
5 CLASSIFIER
$$P(y \mid x_1, \ldots, x_n) = \frac{P(y)\,P(x_1, \ldots, x_n \mid y)}{P(x_1, \ldots, x_n)} \quad (1)$$
Maximum A Posteriori (MAP) estimation can be used to estimate P(y) and P(xᵢ | y); the former is then the relative frequency of class y in the training set. Naive Bayes classifiers differ mainly in the assumptions they make regarding the distribution of P(xᵢ | y). In spite of their apparently over-simplified assumptions, Naive Bayes classifiers have worked quite well in many real-world situations, particularly document classification and spam filtering. They require only a small amount of training data to estimate the necessary parameters. Naive Bayes learners and classifiers can be extremely fast compared with more sophisticated methods. The decoupling of the class-conditional feature distributions means that each distribution can be independently estimated as a one-dimensional distribution, which in turn helps to alleviate problems stemming from the curse of dimensionality.
The vibration signals were acquired for the various conditions: fault-free condition, central shaft fault, bearing fault condition, and combined central shaft and bearing faults. A total of 250 data points were taken for each condition. The sample was then split into two equal parts, the first phase for training followed by testing; in each phase 125 signals were taken for classification, as sketched below.
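A minimal sketch of this procedure with a Gaussian Naive Bayes classifier is given below; the feature file and column names are hypothetical, and the 50/50 split mirrors the training and testing phases described above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, confusion_matrix

data = pd.read_csv("satr_features.csv")            # statistical features per signal
X, y = data.drop(columns="condition"), data["condition"]

# 50/50 split mirroring the training and testing phases described above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
print(confusion_matrix(y_te, y_pred))
print("accuracy:", accuracy_score(y_te, y_pred))
```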
Feature extraction and feature selection were discussed in Sections 4 and 4.1. Of the eleven available statistical features, the best ones were selected for the following reasons:
To avoid unnecessary computation and poor results (dimensionality reduction).
To save time and build a robust model.
For classification and validation, the Naive Bayes and ANN algorithms were utilized and the results are discussed here. The effectiveness of the sample quality is indicated by the True Positive (TP) rate and False Positive (FP) rate in Table 4. For a quality classification, the false positive (FP) value should approach 0 and the true positive (TP) value should be near 1. In Table 4 the TP values approach 1 and the FP values are near 0, which highlights the quality of this classification. In addition, the classified data may be exhibited in the form of a confusion matrix, as indicated in Table 5.
The features that contribute most in deciding the various faults of the SATR are standard deviation, skewness, sample variance and standard error. The standard deviation determines how much variability is associated with a coefficient estimate; a coefficient is significant if it is non-zero. The standard deviation measures the separation between the faulty and non-faulty conditions: the higher the standard deviation value, the larger the gap between the faulty and good conditions. Skewness measures the symmetry in the samples; when the skewness value reaches zero, it indicates an error-free condition. The critical features selected for classification show a clear margin from each other, which substantiates their selection (Figure 3). The results of the 1,000 samples were reviewed with the help of a confusion matrix; an understanding of the confusion matrix, presented in Table 5, is essential before interpreting it.

Table 5. Confusion matrix for the Naive Bayes classifier.
FFC 212 38 0 0
CSF 33 193 0 22
BFC 0 2 248 0
BFC & CSF 0 14 0 236
The fault-free condition (FFC) is represented in the first row, followed by the central shaft fault (CSF) in the second; the bearing fault condition (BFC) and the combined fault condition (CSF & BFC) are shown in the third and fourth rows of the table.
As is evident from the confusion matrix (Table 5), 250 samples were taken for each condition of the self-aligning troughing roller. The diagonal elements of the confusion matrix represent the correctly classified data, while the incorrectly classified data points appear as off-diagonal elements. This is how the classification accuracies are obtained from the confusion matrix.
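As a quick check, the overall accuracy is the sum of the diagonal elements divided by the total number of samples; applying this to the Table 5 values gives approximately 89%, consistent with the roughly 90% reported:

```python
import numpy as np

# rows: actual FFC, CSF, BFC, BFC & CSF; columns: predicted (values from Table 5)
cm = np.array([[212,  38,   0,   0],
               [ 33, 193,   0,  22],
               [  0,   2, 248,   0],
               [  0,  14,   0, 236]])
accuracy = np.trace(cm) / cm.sum()  # correctly classified / total samples
print(f"overall accuracy = {accuracy:.1%}")
```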
In this case, 212 of the fault-free condition (FFC) data points were correctly classified and the remaining 38 were misclassified as central shaft fault (CSF). Similarly, 193 data points of the central shaft fault (CSF) were correctly classified, with 33 misclassified as fault-free condition and 22 as combined faults. In this manner the confusion matrix is interpreted, and the classification accuracy obtained was 90%. These results are for a particular dataset; hence the 90% classification accuracy may be expected to generalize to similar feature data. Furthermore it is
7 CONCLUSION
From the current analysis, it is evident that research on the SATR of coal-handling belt conveyors has extensive scope for further application and examination. The prototype set-up was created with a highly sensitive accelerometer of 10 mV/g sensitivity and a frequency range of 0–2000 Hz (3 dB), suitable for vibration monitoring. The accelerometer was hermetically mounted at the self-aligning troughing roller's stinger support (shown in Figure 2), which is an ideal accelerometer mounting technique for vibration extraction. The vibration signals were acquired using a data acquisition system. The acquired information was preprocessed to extract the measurable features. The best features were identified utilizing the Naive Bayes algorithm and the faults were classified with them.
Since the information was acquired in a particular working state, the end result may be generalized to similar cases. Addressing the shortcomings of conventional failure analysis of the self-aligning troughing roller, the methodology adopted can serve as a guideline for future research in this area. The classification accuracy of 90% is significant in this application. It can be concluded that statistical features and the Naive Bayes algorithm are good options for fault diagnosis of the self-aligning troughing roller in bulk material handling belt conveyors.
REFERENCES
Hu, B., Sun, J., Wang, J., Dai, C. & Chang, (2016). Localization for sparse array using non-negative sparse Bayesian learning. Signal Processing, 127, 37–43.
Kiaee, F., Sheikhzadeh, H. & Eftekhari Mahabadi, S. (2016). Sparse Bayesian mixed-effects extreme
learning machine, an approach for unobserved clustered heterogeneity. Neurocomputing 175, 411–420.
Magnant, C., Giremus, A., Grivel, E., Ratton, L. & Joseph, B. (2016). Bayesian non-parametric meth-
ods for dynamic state-noise covariance matrix estimation: Application to target tracking. Signal
Processing, 127, 135–150.
Mori, J. & Mahalec, V. (2016). Inference in hybrid Bayesian networks with large discrete and continuous domains. Expert Systems with Applications, 49, 1–19.
Muralidharan V., Sugumaran, V. & Sakthivel, N. (2014). Fault diagnosis of monoblock centrifugal
pump using stationary wavelet features and Bayes algorithm. Asian Journal of Science and Applied
Technology, 3, 1–4.
Muralidharan, V. & Sugumaran, V. (2017). A comparative study between Support Vector Machine
(SVM) and Extreme Learning Machine (ELM) for fault detection in pumps. Indian Journal of Sci-
ence and Technology, 9, 1–4.
Murru, N. & Rossini, R. (2016). A Bayesian approach for initialization of weights in backpropagation
neural net with application to character recognition. Neurocomputing, 193, 92–105.
Wang, R., Gao, L.-M. & Wang, (2016). Bayesian network classifiers based on Gaussian kernel density.
Expert Systems with Applications, 51, 207–217.
Wong, P.-K., Zhong, J., Yang, Z.-X. & Vong, C.-M. (2016). Sparse Bayesian extreme learning committee
machine for engine simultaneous fault diagnosis. Neurocomputing, 174, 331–343.
Zhang, J., Chen, Z., Cheng, P. & Huang, X. (2016). Multiple-measurement vector based implementation for single-measurement vector sparse Bayesian learning with reduced complexity. Signal Processing, 118, 153–158.
Sirosh Prakash
Department of Naval Architecture and Shipbuilding Engineering, Sree Narayana Gurukulam College
of Engineering, Kolenchery, Ernakulam, India
K.K. Smitha
Department of Civil Engineering, Sree Narayana Gurukulam College of Engineering, Kolenchery,
Ernakulam, India
ABSTRACT: The design used in the present work is from the conceptual design of a container ship and includes preliminary analysis of the structural design of the midship section. The main objective of the work is to study the structural response of the midship section to static loading. The present work is carried out in ANSYS, a well-known finite element modeling software package. The first step is to produce the required model, which can either be modeled directly in ANSYS or imported from CAD software. For this analysis, the model was developed in ANSYS Parametric Design Language (APDL). As it is a preliminary analysis, several simplifying assumptions are made, such as reducing the number of girders and longitudinal stiffeners. The static forces taken into account are the hydrostatic pressure acting on the ship's hull, the self-weight of structural members, and the loads from containers. There are two approaches for load application: the load combination method and the resultant force method. In the first method, we apply the forces acting on each structural member. In the second method, we find the resultant forces and apply them to the respective structural members.
1 INTRODUCTION
Ships exposed to the sea undergo different kinds of forces and consequent deformation, so it is necessary to do a preliminary analysis to find the response of the ship structures. Generally, there are two types of loads acting on a ship: dynamic loads and static loads. Ship structural design is a challenging task in the shipbuilding process. The structural design should fulfill two main objectives: one is to design the ship structure to withstand the loads acting on it; the other is to design the structural members economically.
The next step in structural design is the evaluation of loads, nature of loads, and so on. The
initial structural dimensions are fixed according to stress analysis of beams, plates and
the shell under hydrostatic pressure, bending and concentrated loads. The loads that strongly
affect the deformation of hull girders are hydrostatic pressure and cargo loads (Eyres, 1988;
Taggart, 1980; Souadji, 2012).
The modeling procedure using ANSYS software can be divided into three parts, that is, the
modeling of the side shell, the modeling of side girders, and the modeling of the bulkhead.
The following steps illustrate the process of model development:
3 MESHING IN ANSYS
In meshing, the surface and volumes are divided into a number of elements using nodes.
The accuracy of the meshed model varies according to the element selected (e.g. 10-node,
20-node and 4-node quad). The accuracy also depends on the type of meshing—coarse or
fine. A coarse mesh gives poor results, because the effect of continuity is reduced. Hence,
to obtain near-perfect results it is necessary to use a fine mesh. The mesh can be modified
using the Refine option. The mesh is refined with respect to nodes, elements, areas, and volumes. It is refined near the edges and joints to obtain better results. Figure 1 shows a meshed model of the structure.
Once the meshing is completed, various loads can be applied to the structures of the ship. The side shell of the ship will be subjected to hydrostatic pressure. This pressure can be calculated using the formula ρgh, where ρ is the density of water, g is the acceleration due to gravity, and h is the depth below the waterline. It is applied as a pressure load to the bottom and side shell up to the draft level. Using ANSYS, we can apply pressure loads, forces, moments, uniformly distributed loads (UDLs), and so on. The boundary conditions can be applied by specifying the displacements UX, UY and UZ, and the rotations RX, RY and RZ. The upper side of the bulkhead will be under loads from the decks; these can be applied as a uniformly distributed load. The forces are applied to selected elements by selecting the nodes, areas, and elements.
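As a small illustration of this ρgh load calculation (the density, draft and sample depths are assumed values for the sketch, not taken from the paper):

```python
RHO = 1025.0   # assumed sea-water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2
DRAFT = 8.0    # assumed draft, m

def hydrostatic_pressure(h):
    """Pressure p = rho*g*h at depth h below the waterline (h <= draft)."""
    return RHO * G * h

for h in (0.0, DRAFT / 2, DRAFT):
    print(f"h = {h:4.1f} m -> p = {hydrostatic_pressure(h) / 1e3:6.1f} kPa")
```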
Analysis of a container ship carried out by Souadji (2012) has been taken as the basis for the
present analysis. The details of ship cross section are shown in Table 1 and Figure 2.
Structural analysis in ANSYS involves various steps, including modeling, meshing, load
application, obtaining the solution, and understanding the output. The various steps used for
the present structural analysis were as follows:
a. Assigning the type of element used; for this work, the four-node shell element SHELL181 was used.
b. Assigning the material properties, the modulus of elasticity, Poisson’s ratio, and density.
Defining these properties helps the software to select the appropriate material.
c. Providing thickness value of the shell element. There is a separate library called Sections
to provide this thickness value.
d. In modeling, the coordinate values are provided. As the model becomes more complex,
the number of key points also increases. The key points are joined using lines, which will
give us the required area.
e. Once the modeling is completed the next process is to develop a meshed model. This is the
process in which the geometry is converted into a number of elements. Meshing is neces-
sary before the application of loads.
f. Various loads are then applied to the structure. The different static loads acting are the hydro-
static pressure, loads from containers, and the self-weight of structural members. The resultant
load is calculated and applied to the respective parts of the midship section. Load calculations:
i. Calculation of loads from containers:
Payload from one container = volume of the container × density of the mate-
rial stored = 0.454 × 80350 kg
No. of containers in a compartment = 70
Total load from containers = 2000 N/compartment
Total load including self-weight = 3000 N/compartment
ii. Hydrostatic pressure = 105581 kg/m2
6 DISCUSSION OF RESULTS
Figure 3 shows the deflected profile of the structure. The maximum value of deflection
is derived as 0.84 m. Due to the complexity of the structure and the high computational
time necessary to solve the model, some of the longitudinal members are omitted from the
ANSYS model. This reduces the stiffness of the modeled structure compared to the actual
physical structure and hence a higher value of deflection is obtained in the finite element
analysis. In order to obtain a value for deflection that is closer to that of the physical model,
the full structure must be modeled mathematically in ANSYS, which shows the importance
of precise modeling of physical structures. Figure 3 also shows the stress in the x direction.
The maximum stress value is 0.301 × 109 N/m2; this value occurs at one or two points where
the stress concentration comes into the picture. This can be avoided by refining the mesh near
the stress concentration points. The stresses in x, y and z directions can be obtained from the
ANSYS software.
7 CONCLUSION
Structural analysis of a midship section can be carried out using ANSYS finite element soft-
ware. The deflection and stresses in the x direction can be evaluated.
REFERENCES
Eyres, D.J. (1988). Ship construction (3rd ed., pp. 201–320). Oxford, UK: Butterworth-Heinemann.
Larsson, R. (1988). Ship structures—Basic course (MMA130). Gothenburg, Sweden: Department of
Applied Mechanics, Chalmers University of Technology. Retrieved from http://www.am.chalmers.
se/∼ragnar/ship_structures_home/lectures/L1.pdf.
Souadji, W. (2012). Structural design of a containership approximately 3100 TEU according to the con-
cept of general ship design B-178 (Master’s thesis, Western Pomeranian University of Technology,
Szczecin, Poland). Retrieved from http://m120.emship.eu/Documents/MasterThesis/2012/Wafaa%20
Souadji%20.pdf.
Taggart, R. (Ed.). (1980). Ship design and construction (pp. 130–224). New York, NY: The Society of
Naval Architects & Marine Engineers.
K. Sandeep
Department of Mechanical Engineering, Karunya University, Coimbatore, Tamil Nadu, India
M. Sekar
GMR Institute of Technology, Srikakulam, Andhra Pradesh, India
ABSTRACT: Lubricants with natural origin are known for their biodegradability and hence
are called biolubricants. This study examined the tribological, physical and chemical properties
of a biolubricant derived from Palm Kernel Oil (PKO). Zinc dialkyldithiophosphate (ZDDP)
is used as an additive, and a comparison of the properties of this newly developed oil with pure PKO and SAE 20W40 engine oil was conducted. Friction and wear tests were performed in
a four-ball tribo tester as per ASTM D4172 standards. Test results reveal that pure PKO has
good tribological and physical properties (with the exception of its melting point) compared
to other pure vegetable oils. Modification of PKO with ZDDP made the results even better—
values of wear scar diameter and coefficient of friction were lower than for SAE 20W40 oil.
The melting point also reduced to 9°C, which can be further reduced by chemical modification.
1 INTRODUCTION
Lubricating oils are used in domestic and industrial processes to increase the life of machin-
ery. They also make the provision of energy easier and at lower cost. Growing consumption
of different lubricant types that are mostly mineral-based or synthetic leads to accidental
but unavoidable inflow of considerable quantities of non-biodegradable lubricants into the
environment. The increase in ecological concern inspires research in the lubricant industry
into raw materials from renewable sources.
Biodegradability is the ability of a substance to be decomposed by microorganisms. Veg-
etable oils have biodegradability of 97% to 99%, but that of mineral oils is only 20% to 40%,
according to Rudnick (2006). However, vegetable oils have failed to meet the demands of indus-
trial lubricants by not having acceptable physical and tribological properties. Researchers are
seeking methods and additives to improve these properties to an acceptable level so that there
can be considerable reductions in the discharge of non-biodegradable oils into the environment.
There are various methods of improving the tribological and physical properties of vegeta-
ble oils. Additives and chemical modification are the most frequently adopted methods. Vari-
ous classes of additives, such as extreme pressure additives, pour-point depressors, viscosity
modifiers, corrosion inhibitors, and nanoparticles, are used nowadays to improve such prop-
erties. One of the most commonly used additives is zinc dialkyldithiophosphate (ZDDP).
2 EXPERIMENTAL DETAILS
Table 1. CoF, wear scar diameter (WSD, µm) and viscosity index for each oil.
WSD. Azhari et al. (2015) explained the mechanism behind this as the reaction of ZDDP
with the metal surface to form a solid protective film and the reaction layer. When metal is
immersed in ZDDP solution in a lubricant or other non-polar solvent, a thermal film rapidly
forms at the metal surface.
The WSD and CoF values of pure PKO, PKO with additives, and SAE 20W40 oil are
shown in Figure 2 and Table 1.
Table 3. Thermal properties of servo oil, palm kernel oil, and palm kernel oil + 1.5% ZDDP.
ester values of PKO and PKO + 1.5% ZDDP, which gave the best results in wear and friction
tests, are shown in Table 2.
4 CONCLUSION
Tribological tests show that palm kernel oil has better anti-wear and anti-friction properties
than SAE 20W40 oil. PKO samples with the ZDDP additive resulted in the minimum wear scar diameter and coefficient of friction. The viscosity index was also comparable with commercial 20W40 oil. The chemical properties of palm kernel oil are better than those of all other vegetable
REFERENCES
American Society for Testing Materials (ASTM). (1999). D4172-94: Standard Test Method for Wear
Preventive Characteristics of Lubricating Fluid (Four-Ball Method). West Conshohocken, PA: ASTM
International.
Azhari, M.A., Fathe’li, M.A., Aziz, N.S.A., Nadzri, M.S.M. & Yusuf, Y. (2015). A review on addition
of zinc dialkyldithiophosphate in vegetable oil as physical properties improver. ARPN Journal of
Engineering and Applied Sciences, 10(15), 6496–6500.
Azhari, M.A., Suffian, Q.N. & Nuri, N.R.M. (2014). The effect of zinc dialkyldithiophosphate addition
to corn oil in suppression of oxidation as enhancement for bio lubricants: A review. ARPN Journal
of Engineering and Applied Sciences, 9(9), 1447–1449.
Balamurugan, K., Kanagasabapathy, N. & Mayilsamy, K. (2010). Studies on soya bean based lubricant
for diesel engines. Journal of Scientific & Industrial Research, 69, 794–797.
Barnes, A.M., Bartle, K.D. & Thibon, V.R. (2001). A review of zinc dialkyldithiophosphates (ZDDPs):
Characterisation and role in the lubricating oil. Tribology International, 34, 389–395.
Dinda, S., Patwardhan, A.V., Goud, V. & Pradhan, N.C. (2008). Epoxidation of cotton seed oil by aque-
ous hydrogen peroxide catalyzed by liquid inorganic solids. Bio Resource Technology, 99, 3737–3744.
Erhan, S.Z., Sharma, B.K. & Perez, J.M. (2006). Oxidation and low temperature stability of vegetable
oil-based lubricants. Industrial Crops and Products, 24, 292–299.
Luna, F.M.T., Cavalcante, J.B., Silva, F.O.N. & Cavalcante, C.L., Jr. (2015). Studies on biodegradability
of bio-based lubricants. Tribology International, 92, 301–306.
Mahipal, D., Krishnanunni, P., Mohammed Rafeekh, P. & Jayadas N.H. (2014). Analysis of lubrication
properties of zinc-dialkyl-dithio-phosphate (ZDDP) additive on karanja oil (Pongamia pinnatta) as
a green lubricant. International Journal of Engineering Research, 3(8), 494–496.
Rudnick, L.R. (Ed.). (2006). Synthetics, mineral oils, and bio-based lubricants: Chemistry and technology.
Boca Raton, FL: CRC Press.
Zulkifli, N.W.M., Azman, S.S.N., Kalam, M.A., Masjuki, H.H., Yunus, R. & Gulzar, M. (2014). Lubric-
ity of bio-based lubricant derived from different chemically modified fatty acid methyl ester. Tribol-
ogy International, 93, 555–562.
S. Thanigaiarasu
Department of Aerospace Engineering, MIT Campus, Anna University, Chennai, India
S. Elangovan
Department of Aerospace Engineering, Bharath University, Chennai, India
ABSTRACT: An experimental analysis has been carried out to examine the mixing-promoting effectiveness of tabs with circular perforations of different diameters: 1.5 mm, 2 mm and 2.5 mm. The geometrical blockage offered by the perforated tabs, placed diametrically opposite at the nozzle exit, was 8.42%, 7.55% and 6.42%, respectively, for perforation diameters of 1.5 mm, 2 mm and 2.5 mm. The Mach number along the jet central axis and the Mach number profiles in the directions along and tangential to the tabs were calculated at various axial locations. The results of the Mach 0.4, 0.6 and 0.8 jets studied show that the tab with the 2 mm perforation is a better mixing promoter than those with 1.5 mm and 2.5 mm perforations, resulting in a core length reduction of 62% for the Mach 0.6 jet. The corresponding reductions in core length for the 1.5 mm and 2.5 mm holes are only 47.61% and 42%, respectively.
NOMENCLATURE
1 INTRODUCTION
Control of jets has become an active area of research owing to its application potential, such as improvement of stealth capabilities, minimization of base heating and reduction of aeroacoustic noise. Most of these applications require mixing improvement of the jet, i.e. the mass entrained from the region adjoining the jet has to be mixed with the jet fluid mass as rapidly as possible (Reeder & Zaman, 1996).
To achieve mixing enhancement, small-scale mixing-promoting vortices need to originate at the nozzle exit (Reeder & Samimy, 1996). With this aim, a considerable number of passive and active techniques have been identified by researchers over recent decades (Rathakrishnan, 2010). Among these, passive control in the form of tab
2 EXPERIMENTAL DETAILS
The experiments were carried out in the High Speed Jet Laboratory at MIT, Anna University,
Chennai. The test facility consists of an air delivering system (compressors and reservoir)
and an open jet testing facility as shown in Figure 1.
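Although the paper's exact data-reduction procedure is not stated here, pitot surveys of subsonic jets typically convert the measured stagnation-to-static pressure ratio to Mach number through the isentropic relation; a minimal sketch with an illustrative pressure ratio:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def mach_from_pressure_ratio(p0_over_p):
    """Subsonic isentropic relation:
    M = sqrt( (2/(gamma-1)) * ((p0/p)**((gamma-1)/gamma) - 1) )."""
    return math.sqrt((2.0 / (GAMMA - 1.0)) *
                     (p0_over_p ** ((GAMMA - 1.0) / GAMMA) - 1.0))

print(round(mach_from_pressure_ratio(1.524), 2))  # ~0.80
```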
Figure 4. Physical mechanism behind vortex generation between perforation and the tab.
vortices emitted by the web and the perforation. It is also observed that the decay in the transition region is very rapid beyond 11D compared with the uncontrolled jet. This is because the small-scale vortices are stronger than the large-scale vortices and travel a longer distance, up to 20D.
Figure 6 depicts the variation of the potential core at Mach 0.6 for the uncontrolled jet and for the tabs with 8.42% blockage (1.5 mm circular perforation), 7.55% blockage (2 mm circular perforation) and 6.42% blockage (2.5 mm circular perforation). The potential core for the uncontrolled jet extends almost up to X/D = 5.25. The core extends to X/D = 2 for the tab with 7.55% blockage (2 mm circular perforation), whereas the core length extends to X/D = 2.75 and X/D = 2.78 for the tab arrangements with 8.42% blockage (1.5 mm circular perforation) and 6.42% blockage (2.5 mm circular perforation), respectively. The tab with 7.55% blockage (2 mm circular perforation) is thus more effective for mixing enhancement than the other perforated tab configurations studied.
This is perhaps because of the varying radii of curvature created by the perforation inside the tab, which is responsible for manipulating the dimensions of the vortices (Quinn, 1995).
perforation), whereas it is about Z/D = 0.7 and Z/D = 0.6 for the tabs with 8.42% blockage (1.5 mm circular perforation) and 6.42% blockage (2.5 mm circular perforation), respectively, at axial positions X/D = 0.15 and 0.25. This shows that the tab with 7.55% blockage (2 mm circular perforation) alters the jet cross-sectional area more effectively and entrains more air from the surroundings compared to the other perforated tabs studied.
From Figure 10, it is seen that the mixing enhancement performance in the XY-plane is better for the tab with 7.55% blockage (2 mm circular perforation) due to the greater contraction in area. Figure 12 shows that the jet develops faster along Z/D for the tab with 7.55% blockage (2 mm circular perforation) at X/D = 0.15 for Mach 0.8. This may be connected to the sideways drift of small eddies from the nozzle outlet.
4 CONCLUSION
The results of the current study on subsonic jet control with perforated tabs show that all the perforations studied are effective in promoting mixing in the near field of the jet. Among them, the perforation of 2 mm diameter is the most effective mixing promoter, resulting in a core length reduction of 62%. The corresponding core length reductions for the 1.5 mm and 2.5 mm holes are only around 47%.
REFERENCES
Arun Kumar, P. & Rathakrishnan, E. (2013a). Corrugated triangular tabs for supersonic jet control.
Journal of Aerospace Engineering, 1–15.
Arun Kumar, P. & Rathakrishnan, E. (2013b). Corrugated truncated triangular tabs for supersonic jet
control. Journal of Fluid Mechanics, 135, 1–11.
Arun Kumar, P. & Rathakrishnan, E. (2013c). Truncated triangular tabs for supersonic—jet control.
Journal of Propulsive Power, 29, 50–65.
Bohl, D. & Foss, J.F. (1996). Enhancement of passive mixing tabs by the addition of secondary tabs. AIAA Paper 96-054.
Bradbury, L.J.S. & Khadem, A.H. (1975). The distortion of a jet by tabs. Journal of Fluid Mechanics,
70 (4), 801–813.
1 INTRODUCTION
The sun is the head of the family of planets, and is the most abundant source of renewable
energy for our earth. Reserves of other energy sources, such as coal and fossil fuel, will even-
tually diminish. Solar energy contains radiant heat and light energy from the sun, which can
be harnessed with modern technologies like photovoltaic (PV) cells, solar heating, artificial
photosynthesis, and solar thermal electricity. Solar thermal collectors gain energy through
radiation, conduction and convection. A flat plate collector will lose energy through conduc-
tion as well as convection; thus, it reduces the amount of energy that can be transferred to
working fluid. The evacuated tube collector is a new harnessing technology. It is ideal for
high-temperature applications such as boiling water, pre-heating, and steam production.
Exergy is the ability of a system to do useful work before it has been brought into thermal,
mechanical and chemical equilibrium with the environment. It is derived from both the first
and second laws of thermodynamics. When a system and its surroundings are not in equilib-
rium with each other, then we can extract work. This means that if there is any difference in
temperature between a system and its surroundings, it will be in unstable equilibrium. This
situation can be used to produce work. On the basis of the second law of thermodynamics, it
is impossible to convert low-grade energy completely into shaft work. The part of low-grade
energy that can be converted into useful work is termed available energy or exergy. The per-
formance of Parabolic Trough Collectors (PTCs) can be explained in terms of exergy, which
provides a useful basis for the design and optimization of PTCs.
The present work includes modification of an existing PTC with substitution of an evacuated
tube. The main drawback of the existing system is that the outside surface of the receiver tube
is exposed to the atmosphere; thus convective energy loss will be dominant, which reduces the
performance of the PTC. In order to reduce this loss, evacuated tubes are introduced instead of
copper tubes. Exergy analysis was performed to assess the performance of the modified PTC.
The performance of a PTC can be estimated by the efficiency factor η, defined as the ratio of the net heat gain to the solar radiation energy, based on the diffuse reflection area of the solar collector:

$$\eta = \frac{\dot{m}\,C_p\,(T_{out} - T_{in})}{I\,(A_a - A_s)\,\rho} \quad (1)$$
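A direct transcription of this efficiency factor; all numbers in the usage line are illustrative assumptions, not measured values from the study:

```python
def collector_efficiency(m_dot, cp, t_out, t_in, I, A_a, A_s, rho):
    """eta = m_dot*Cp*(T_out - T_in) / (I*(A_a - A_s)*rho),
    where rho is the reflectivity of the concentrator."""
    return m_dot * cp * (t_out - t_in) / (I * (A_a - A_s) * rho)

# illustrative values: 0.0025 kg/s of water heated by 50 degC at 1000 W/m^2
print(collector_efficiency(0.0025, 4186.0, 80.0, 30.0, 1000.0, 1.2, 0.05, 0.85))
```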
$$\frac{dE_{cv}}{dt} = \sum_j \dot{E}_{q_j} - \dot{W}_{cv} + \sum_i \dot{m}_i e_{f_i} - \sum_e \dot{m}_e e_{f_e} - \dot{E}_d - \dot{E}_{loss} \quad (2)$$
Steady-state flow is assumed, kinetic and potential energy changes are taken as negligible, and the specific heat and other properties are assumed to remain constant during operation.
Exergy efficiency is defined as the ratio of exergy gain to maximum possible solar radia-
tion exergy, that is:
$$\eta_{ex} = \frac{\dot{E}_{gain}}{\dot{E}_{sr}} = \frac{\dot{E}_e - \dot{E}_i}{\dot{E}_{sr}}$$
$$\eta_{ex} = \frac{\dot{m}\left[\displaystyle\int_{T_i}^{T_e} C_p(T)\,dT - T_o \int_{T_i}^{T_e} \frac{C_p(T)}{T}\,dT\right]}{I_b\,(A_a - A_s)\,\rho\,\psi} \quad (3)$$

For constant specific heat this reduces to

$$\eta_{ex} = \frac{\dot{m}\,C_p\left[(T_e - T_i) - T_o \ln\dfrac{T_e}{T_i}\right]}{I_b\,(A_a - A_s)\,\rho\,\psi}$$
where ψ is the relative potential of the maximum useful work extracted from radiation and is
calculated with Petela’s (2003) formula:
$$\psi = 1 - \frac{4}{3}\frac{T_o}{T_s} + \frac{1}{3}\left(\frac{T_o}{T_s}\right)^4$$
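A minimal sketch combining Petela's formula with the constant-specific-heat form of the exergy efficiency above; the temperatures, flow rate and geometry in the usage lines are illustrative assumptions:

```python
import math

def petela_psi(t_o, t_s):
    """psi = 1 - (4/3)*(To/Ts) + (1/3)*(To/Ts)**4, absolute temperatures."""
    r = t_o / t_s
    return 1.0 - (4.0 / 3.0) * r + (r ** 4) / 3.0

def exergy_efficiency(m_dot, cp, t_e, t_i, t_o, I_b, A_a, A_s, rho, psi):
    """eta_ex = m_dot*Cp*[(Te - Ti) - To*ln(Te/Ti)] / (Ib*(Aa - As)*rho*psi)."""
    gain = m_dot * cp * ((t_e - t_i) - t_o * math.log(t_e / t_i))
    return gain / (I_b * (A_a - A_s) * rho * psi)

psi = petela_psi(300.0, 5777.0)  # ambient vs apparent sun temperature
print(exergy_efficiency(0.0025, 4186.0, 353.0, 303.0, 300.0,
                        1000.0, 1.2, 0.05, 0.85, psi))
```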
3 EXPERIMENTAL FACILITY
In order to avoid confusion in terminology and ensure consistency between terms we use
‘concentrating collector’ to represent the entire setup. The term ‘concentrator’ is used for the
optical subsystem that directs the solar radiation onto the absorber, and the term ‘receiver’
represents the subsystem consisting of the absorber, coating and cover, as shown in Figure 2.
The experiment was carried out during February and March 2017 at the Government Engineering College, Kozhikode (11.2858° N, 75.7703° E), between the hours of 9 a.m. and 3 p.m. The experiment was conducted in differing climatic conditions, such as cloudy and sunny skies. Further, the study was carried out at different solar intensities, varying from 600 to 1000 W/m². The efficiency of the PTC was calculated using Equation 1.
Figure 4. Variation of outlet temperature with mass flow rate (for constant 1000 W/m2 solar
intensity).
Figure 5. Variation of efficiency with mass flow rate (for constant 1000 W/m2 solar intensity).
Figure 7. Variation of outlet temperature and efficiency with time (for constant ṁ = 0.0025 kg/s).
Efficiencies were calculated using Equation 1. The evacuated tube collector shows the greatest efficiency, nearly twice that of the copper tube collector with Al2O3 nanofluid as the working fluid. The maximum efficiencies obtained were 58.3%, 29.3%, 18.3% and 11% for the evacuated tube, copper tube with nanofluid, copper tube, and aluminum tube, respectively.
5 CONCLUSIONS
Some modifications were performed in the design of a PTC and experiments were conducted
with different mass flow rates, various climates and different solar intensities to compare its
performance. The study demonstrated the effectiveness of the proposed modification to the
design of the PTC. Exergy analysis was performed in order to establish various parameters
that affect the performance of the PTC. This was a cost-effective way of obtaining high tem-
peratures in response to future energy demands. The following conclusions were drawn upon
the completion of the work:
• The maximum temperature obtained was 143°C, which is 33°C (∼30%) higher than that of
the previous model using copper tube with water/Al2O3 nanoparticles.
• The maximum temperature obtained for the previous model without using nanoparticles
was 96°C, which is 49% lower than that of the new model.
NOMENCLATURE
Aa Aperture area m2
As Shaded area m2
C Concentration ratio
D Diameter m
UL Overall heat loss coefficient W/m2 K
W Aperture m
Ėe Exergy output W
Ėi Exergy input W
ṁ Mass flow rate kg/s
η Energy efficiency
ηex Exergy efficiency
θα Acceptance angle degree
ρ Reflectivity
σ Stefan–Boltzmann constant W/m2/K4
Subscripts
a Atmosphere
r Receiver
REFERENCES
Jafarkazemi, F. & Ahmadifard, E. (2013). Energetic and exergetic evaluation of flat plate solar collec-
tors. Renewable Energy, 56, 55–63.
Nikhil, M.C. & Sreejith, B. (2016). Performance evaluation of a modified parabolic trough concentrator
using nanofluid as the working fluid. NET, 192–201.
Padilla, R.V., Fontalvo, A., Demirkaya, G., Martinez, A. & Quiroga, A.G. (2014). Exergy analysis of
parabolic trough solar receiver. Applied Thermal Engineering, 67, 579–586.
Petela, R. (2003). Exergy of undiluted thermal radiation. Solar Energy, 74, 469–488.
Tyagi, S.K., Wang, S., Singhal, M.K., Kaushik, S.C. & Park, S.R. (2007). Exergy analysis and para-
metric study of concentrating type solar collectors. International Journal of Thermal Sciences, 46,
1304–1310.
Yadav, A., Kumar, M. & Balram. (2013). Experimental study and analysis of parabolic trough collector
with various reflectors. International Journal of Energy and Power Engineering, 7(12), 1659–1663.
ABSTRACT: The principal objective of the present work is to compute the GWP, ODP and RF number of, and to perform TEWI analysis for, various ternary R134a/R1270/R290 blends as alternatives to R22. In this study, thirteen refrigerant blends consisting of R134a, R1270 and R290 at different compositions are considered. The GWP and ODP of the refrigerant blends are computed using simple correlations. The emission of greenhouse gases and the flammability of the refrigerants are estimated using TEWI and RF analysis, respectively. Analytical results revealed that all thirteen studied fluids are ozone friendly. The GWP of refrigerant M6 (651) is lower than that of R22 (1760). RF analysis showed that all thirteen refrigerant blends fall into the ASHRAE A2 flammability category. Thermodynamic analysis revealed that the COP of M6 (3.608) is higher than that of R22 (3.534). The TEWI of M6 is the lowest among R22 and the thirteen studied fluids. Hence, refrigerant M6 (R134a/R1270/R290 50/5/45 by mass %) is a suitable alternative to R22.
NOMENCLATURE
HFCs Hydrofluorocarbons
HOC Heat of combustion (kJ/mol)
L Lower flammability limit (kg/m3)
MW Molecular weight (kg/kmol)
RF Refrigerant flammability (kJ/g)
U Upper flammability limit (kg/m3)
Ci Composition of ith component
m Mass flow rate of refrigerant (kg/min)
mi Mass fraction of ith component
WC Compressor work (kJ/kg)
WCP Compressor power (kW)
1 INTRODUCTION
Refrigerant R22 has adverse environmental impacts, namely a high ozone depletion potential (ODP) and a high global warming potential (GWP) (Mohanraj et al. 2009). The Montreal Protocol therefore mandated the phase-out of R22 by the year 2030 (UNEP 1987). Global warming has since become a very significant issue, and the Kyoto Protocol was adopted to address it, under which hydrofluorocarbons (HFCs) were classified among the targeted global warming refrigerants (GECR 1997). Hence, in this study an attempt was made to develop refrigerants that meet the requirements of both the Montreal and Kyoto Protocols. Various performance studies have previously been carried out to find a suitable alternative to refrigerant R22. Theoretical analysis revealed that R444B was a suitable candidate to replace R22 (Atilla G.D. & Vedat O. 2015). Experimental studies reported that R134a requires a larger compressor in order to replace
The GWP and ODP values of the pure refrigerants (R22, R134a, R1270 and R290) are required to compute the GWP and ODP of the various refrigerant blends. The values of GWP and ODP are taken from the literature and are listed in Table 1 (ASHRAE 2009, IPCC 2014). In the present study, a total of thirteen ternary refrigerant blends (R134a/R1270/R290) at various compositions are considered, and the designations followed for the blends are given in Table 1. The correlations used to compute the GWP and ODP of the refrigerant blends are taken from the literature (Ahamed J.U. 2014, Arora R.C. 2010) and amount to mass-fraction-weighted averages of the component values:

GWP_{mix} = \sum_i m_i \times GWP_i  (1)

ODP_{mix} = \sum_i m_i \times ODP_i  (2)

The GWP and ODP of the thirteen investigated blends are computed using Equations (1) and (2), respectively. The resulting values are also given in Table 1.
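As an illustrative check of Equations (1) and (2), the short sketch below computes the blend GWP and ODP for composition M6. The pure-component GWP values used are figures assumed from the cited literature; the result reproduces the value of about 651 quoted in the abstract for M6.

```python
# Mass-fraction-weighted blend GWP/ODP (Eqs. 1 and 2). Pure-component values
# are assumed from the cited literature; M6 is R134a/R1270/R290 = 50/5/45 wt%.
GWP = {"R134a": 1300.0, "R1270": 1.8, "R290": 3.3}
ODP = {"R134a": 0.0, "R1270": 0.0, "R290": 0.0}   # all three are ozone friendly

def blend_property(prop, mass_fractions):
    # sum_i m_i * prop_i
    return sum(mass_fractions[k] * prop[k] for k in mass_fractions)

m6 = {"R134a": 0.50, "R1270": 0.05, "R290": 0.45}
print(round(blend_property(GWP, m6)))   # -> 652, approximately the quoted 651
print(blend_property(ODP, m6))          # -> 0.0
```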
The study of flammability is crucial for investigators developing alternative refrigerants, from the viewpoint of safety. Flammability limits give the range of fuel concentration within which a refrigerant blend can burn or ignite, and they are important when assessing the hazards of liquid or gaseous fuel mixtures. Jones proposed correlations to estimate the upper and lower flammability limits of gases and vapors (Jones, G.W. 1938, Zabetakis, M.G. 1965). An index named the refrigerant flammability (RF) number is used to indicate the combustion hazard of a refrigerant; it expresses the hazard of combustion in terms of the flammability limits of each refrigerant. An empirical correlation for computing the RF number is given below (Shigeo Kondo et al. 2002).
RF = \left[ \left(\frac{U}{L}\right)^{1/2} - 1 \right] \times \frac{HOC}{MW}  (3)
To compute the upper and lower flammability limits of the refrigerant blends, Le Chatelier's rule can be used (Shigeo Kondo et al. 2002):
\frac{1}{U_{mix}} = \frac{C_1}{U_1} + \frac{C_2}{U_2} + \cdots = \sum \frac{C_i}{U_i}  (4)

\frac{1}{L_{mix}} = \frac{C_1}{L_1} + \frac{C_2}{L_2} + \cdots = \sum \frac{C_i}{L_i}  (5)
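A minimal sketch of Equations (3) to (5) follows. The flammability limits, heat of combustion and molecular weight used are placeholder values chosen for demonstration, not data from this study.

```python
# Le Chatelier mixing of flammability limits (Eqs. 4, 5) and the RF number
# (Eq. 3). All numerical inputs are placeholders, not the paper's data.
def le_chatelier(limits, fractions):
    # 1/X_mix = sum_i C_i / X_i
    return 1.0 / sum(c / x for c, x in zip(fractions, limits))

def rf_number(U, L, hoc_kj_per_mol, mw_g_per_mol):
    # RF = ((U/L)**0.5 - 1) * HOC / MW, in kJ/g
    return ((U / L) ** 0.5 - 1.0) * hoc_kj_per_mol / mw_g_per_mol

C = [0.50, 0.05, 0.45]          # component fractions of a blend
U = [0.30, 0.20, 0.19]          # upper limits, kg/m3 (placeholders)
L = [0.060, 0.039, 0.038]       # lower limits, kg/m3 (placeholders)
rf = rf_number(le_chatelier(U, C), le_chatelier(L, C), 1200.0, 58.0)
print(f"RF = {rf:.2f} kJ/g ->", "A2" if rf < 30 else "A3")  # classification rule
```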
From the literature, the RF numbers of R290 and R1270 are 52.2 and 62.1 kJ/g, respectively. The flammability limits of R22 and R134a are not available in the literature, so the Jones correlations can be used to compute them. However, under ASHRAE design safety standard 34, the refrigerants R22 and R134a are classified as nonflammable, ASHRAE category A1 (ASHRAE 34 2007). On the basis of the RF number, refrigerants are classified into various groups: an RF number below 30 corresponds to the slightly flammable (ASHRAE A2) group, and a value between 30 and 150 to the flammable (ASHRAE A3) group. Equations (3) to (5) are used to compute the RF numbers of the thirteen R134a/R1270/R290 blends, and the corresponding values are shown in Table 2.
Table 2. RF numbers of the thirteen refrigerant blends.

Blend   RF number (kJ/g)   Safety group
M1      14.90              A2*
M2      20.52              A2*
M3      18.35              A2*
M4      19.79              A2*
M5      21.35              A2*
M6      23.05              A2*
M7      19.57              A2*
M8      21.29              A2*
M9      23.14              A2*
M10     25.15              A2*
M11     23.55              A2*
M12     23.81              A2*
M13     24.07              A2*
*Computed values.
where GWP100 = GWP of a given fluid for a time period of 100 years, m = charge of the given fluid (kg), L = leakage rate of the refrigerant (%), SLife = service lifetime of the device (years), EAn = annual energy consumption (kWh), and C = indirect emission factor (kg CO2/kWh).
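Since these symbols define the ingredients of the TEWI computation, a hedged sketch is given below in the common AIRAH form, with the end-of-life recovery term omitted; every input is an assumed demonstration value, not the data behind the table that follows.

```python
# Hedged TEWI sketch (common AIRAH form, recovery term omitted); all inputs
# are assumed demonstration values, not the data behind the table below.
def tewi(gwp100, charge_kg, leak_pct, life_yr, e_annual_kwh, c_kgco2_per_kwh):
    direct = gwp100 * charge_kg * (leak_pct / 100.0) * life_yr   # leakage emissions
    indirect = e_annual_kwh * c_kgco2_per_kwh * life_yr          # energy-related CO2
    return direct + indirect

print(round(tewi(gwp100=651, charge_kg=1.5, leak_pct=5.0,
                 life_yr=15, e_annual_kwh=1500, c_kgco2_per_kwh=0.82)), "kg CO2")
```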
Fluid   TEWI (kg CO2)
R22     21586
M1      21164
M2      21037
M3      20499
M4      20290
M5      20084
M6      19925
M7      21809
M8      21554
M9      21558
M10     21551
M11     20312
M12     20506
M13     20701
Referring to Figure 3, it is observed that the TEWI of M6 is the lowest among R22 and the thirteen studied refrigerant blends. This is due to the lower compressor power and lower global warming potential of M6.
6 CONCLUSIONS
REFERENCES
Ahamed, J.U., Saidur, R. & Masjuki, M.M. 2014. Investigation of Environmental and Heat Transfer Analysis of Air Conditioner Using Hydrocarbon Mixture Compared to R22. Arabian Journal for Science and Engineering, 39: 4141–4150.
AIRAH. 2012. Methods of calculating Total Equivalent Warming Impact Best Practice Guide lines:
2–20.
ANSI/ASHRAE 2007. Standard 34. Designation and Safety Classification of Refrigerants.
Arora R.C. 2010. Refrigeration and Air conditioning. New Delhi: PHI learning Private Limited.
ASHRAE. 2009. Handbook Fundamentals (SI) chapter 29. Refrigerants: 29.1–29.10.
Atilla Gencer Devecioğlu & Vedat Oruça. 2015. Characteristics of Some New Generation Refrigerants
with Low GWP. Journal of Energy Procedia, 75: 1452−1457.
Chen, W. 2008. A comparative study on the performance and environmental characteristics of R410 A
and R22 residential air conditioners. Applied Thermal Engineering, 28: 1–7.
Devotta, S., Waghmare, A.V., Sawant, N.N., & Domkundwar, B.M. 2001. Alternatives to HCFC-22 for
air conditioners. Applied Thermal Engineering, 21: 703–715.
Global Environmental Change Report, 1997. A brief analysis of the Kyoto protocol. 9(24).
IPCC. 2014. Fifth Assessment Report chapter 8. Anthropogenic and Natural Radiative Forcing:
659–740.
Jones, G.W. 1938. Inflammation Limits and Their Practical Application in Hazardous Industrial Opera-
tions. Chemical. Reviews, 22, 1–26.
Lorentzen, G. 1995. The use of natural refrigerants: a complete solution to the CFC/HCFC predica-
ment. International Journal of Refrigeration, 18 (3): 190–197.
Mohanraj, M., Jayaraj, S. & Muraleedharan, C. 2009. Environment friendly alternatives to halogenated
refrigerants-A review. International Journal of Greenhouse Gas Control, 3(1): 108–119.
Sharmas Vali, S. & Ashok Babu T.P. 2017. Theoretical Performance Investigation of Vapour Compres-
sion Refrigeration System Using HFC and HC Refrigerant Mixtures as Alternatives to Replace R22.
Journal of Energy Procedia, 109: 235–242.
Shigeo Kondo, Akifumi Takahashi, Kazuaki Tokuhashi, Akira Sekiya. 2002. RF number as a new index
for assessing combustion hazard of flammable gases. Journal of Hazardous Materials, A93: 259–267.
Thomas W.D. & Ottone C. 2004. A low carbon, low TEWI refrigeration system design. Applied Ther-
mal Engineering 24: 1119–1128.
United Nations Environmental Programme, (1987). Montreal Protocol on substances that deplete the
ozone layer, Final act. New York: United Nations.
Vincenzo L.R. & Giuseppe P. 2011. Experimental performance evaluation of a vapour compression
refrigerating plant when replacing R22 with alternative refrigerants. Applied Energy 88: 2809–2815.
Zabetakis, M.G. 1965. Flammability Characteristics of Combustible Gases and Vapors. Bulletin 627,
US Bureau of Mines.
Sajith Gopi
Kerala Water Authority, Thrissur, Kerala, India
ABSTRACT: A PVT system combines a solar cell, which produces electricity, with a solar thermal collector, which extracts thermal energy from the sun, so that both forms of energy can be harvested simultaneously. A solar panel is capable of achieving a maximum electrical efficiency of 15 to 25%, while a PVT hybrid collector produces a combined energy efficiency in the range of 55 to 70%. This ability of the PVT collector to trap a large amount of the sun's energy makes it superior to conventional solar panels. This study aims to analyse the performance of a PVT collector under a solar simulator. A 100 W solar panel is used for the construction of the PVT collector. Thermal energy is extracted by water flowing through the system in direct contact with the rear side of the PV panel: water enters through the inlet pipe, heats up by absorbing thermal energy from the panel, and leaves through the exit pipe. The experiment was conducted for three light intensities of 600, 800 and 1000 W/m2. The performance analysis showed that these PVT systems are four to five times more efficient than normal PV systems working under outdoor conditions. The results also showed that an increase in PV panel temperature results in a reduction of electrical efficiency, while the overall efficiency of the PVT system remained almost at its maximum at all temperatures for a given light intensity.
1 INTRODUCTION
In this mechanized world, life is made much easier when a sufficient amount of energy is available. Every automated system uses energy to deliver the desired work output; without energy, every machine is useless. The overexploitation of non-renewable energy sources such as fossil fuels will lead to an energy crisis. It is our duty to conserve the available energy for coming generations by depending on renewable energy sources and by utilizing them efficiently. Among the renewable energy sources, solar energy is considered the best, because it is abundantly available and its tapping is cheaper and easier compared with other energy forms. A photovoltaic thermal (PVT) system is used to extract energy from the sun to the maximum extent. The PV module generates electrical energy, and the heat developed on the PV module is absorbed by water or air flowing through the system.
A PVT system brings together the advantages of a solar thermal collector and a PV panel to extract thermal as well as electrical energy. This makes PVT panels more efficient than a solar thermal collector or a PV panel individually. A PVT system is an integration of a PV module with an air or water heating system, producing electrical energy along with thermal energy. It consists of a PV panel below which a heat transfer fluid (air or water) flows to extract thermal energy from the panel. Since the heat generated on the PV module is taken away by the heat transfer fluid (HTF), the operating temperature of the panel is kept lower than under normal conditions, so that the panel efficiency is improved.
2 EXPERIMENTAL SETUP
A photovoltaic thermal system is developed with water as the heat transfer fluid. The per-
formance evaluation of the developed system is conducted with the help of indoor simulation
system.
The experimental setup for the testing of PVT system is shown in Figure 1. It consists of
a PVT system, an indoor solar simulator, voltage, current and temperature measuring setup
and a water circulating system consisting of a pump and storage tank.
The PVT system consists of a PV panel of 100 W and an attachment to the rear side of the panel to extract thermal energy from it. Energy extraction is achieved by creating a passage
3 DATA REDUCTION
3.1 PV system
The analysis of a PV system is simpler than that of a PVT system, as it involves only the electrical energy output. The power output at the maximum power point is used to find the electrical efficiency of the PV system:

\eta_e = \frac{I_{mp} V_{mp}}{G \times A}  (1)

where I_mp and V_mp are the current and voltage at the maximum power point, G is the incident solar irradiance normal to the surface and A is the collector aperture area. The thermal efficiency of the PVT collector is given by
\eta_t = \frac{\text{Thermal energy output}}{\text{Energy in}} = \frac{\dot m\, C (T_{out} - T_{in})}{G \times A}  (2)
where ṁ is the mass flow rate of the water through the PVT collector, and T_in and T_out are the temperatures of the water at the inlet and outlet of the PVT panel. The overall performance of the PVT system is given by the sum of the electrical and thermal efficiencies.
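As a worked illustration of Equations 1 and 2, the sketch below computes the electrical, thermal and overall efficiencies from one set of assumed readings; the panel area, current, voltage and flow rate are illustrative values, not this study's measurements.

```python
# Data reduction sketch for Eqs. 1 and 2. All readings are assumed sample
# values; the 0.63 m2 aperture is an assumed area for a 100 W panel.
def electrical_eff(I_mp, V_mp, G, A):
    return (I_mp * V_mp) / (G * A)                  # Eq. 1

def thermal_eff(m_dot, c, T_in, T_out, G, A):
    return m_dot * c * (T_out - T_in) / (G * A)     # Eq. 2

G, A = 1000.0, 0.63                                 # W/m2, m2
eta_e = electrical_eff(I_mp=2.5, V_mp=17.0, G=G, A=A)
eta_t = thermal_eff(m_dot=0.005, c=4186.0, T_in=30.0, T_out=48.0, G=G, A=A)
print(f"electrical {eta_e:.1%}, thermal {eta_t:.1%}, overall {eta_e + eta_t:.1%}")
```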
4 RESULTS
The PVT and PV systems were tested using indoor solar simulation and the results obtained
at different illuminations are plotted below.
Panel temperature (°C)   PV efficiency (%)   PVT overall efficiency (%)
35   6.78   65.51
40   6.75   65.50
45   6.67   65.49
50   6.51   65.43
55   6.36   65.39
60   6.22   –
65   6.05   –
70   5.87   –
Panel temperature (°C)   PV efficiency (%)   PVT overall efficiency (%)
35   6.86   61.39
40   6.55   61.08
45   6.38   60.92
50   6.24   60.77
55   6.16   60.69
60   5.99   –
65   5.82   –
70   5.64   –
The performance of the PV and PVT systems at 1000 W/m2 is shown in Figures 7 and 8, respectively.
It is clear from the figures that the performance of both systems showed a trend similar to that at 800 W/m2 and 600 W/m2. There was a reduction of 16.95% in the relative performance of the PV system for a temperature rise of 34°C; in the case of the PVT system, the reduction is only 1.17% of the available energy output for a temperature rise of 19°C. The PV and PVT efficiencies at different panel temperatures are tabulated below for 1000 W/m2 illumination. In this case also, the PV efficiency is significantly affected by temperature, whereas the PVT efficiency is affected only marginally.
From the performance studies of the PV and PVT systems at three different light intensities, it is clear that the PV efficiency increases with increasing light intensity, but the temperature rise of the PV module reduced its efficiency at all three intensities. In the case of the PVT system, the overall efficiency shows a slight decrease with increasing light intensity, because the rate of increase in thermal energy is less than the rate of increase in light energy obtained from the halogen lamp simulator when the illumination is increased from 600 W/m2 to 800 W/m2 and then to 1000 W/m2. Even though the PV panel temperature reached up to 55°C, the temperature attained by the water was only 50°C to 51°C, owing to the heat transfer loss from the PV panel to the water.
Panel temperature (°C)   PV efficiency (%)   PVT overall efficiency (%)
35   6.92   59.77
40   6.82   59.63
45   6.66   59.48
50   6.54   59.30
55   6.30   59.07
60   6.09   –
65   5.91   –
70   5.75   –
Light intensity   Flow rate of water   Outlet temperature (°C)
600 W/m2          0.2143               51
800 W/m2          0.2609               50
1000 W/m2         0.3158               50
Table 4 shows the flow rates and outlet temperatures of water obtained at the three different light intensities for indoor testing.
5 CONCLUSIONS
The performance of the developed PVT system was studied under indoor conditions for light intensities of 600, 800 and 1000 W/m2. The variation of the electrical efficiency of the PV system and the overall efficiency of the PVT system were analysed at all three intensities.
NOMENCLATURE
A Collector aperture area m2
C Specific heat of water J/kg·K
G Incident solar irradiance W/m2
Imp Maximum power point current A
Isc Short-circuit current A
ṁ Mass flow rate of water kg/s
Pmax Maximum power W
Tin Inlet temperature of water °C
Tout Outlet temperature of water °C
Vmp Maximum power point voltage V
Voc Open-circuit voltage V
ηe Electrical efficiency %
ηpower Electrical power generation efficiency %
ηsaving Energy saving efficiency %
ηt Thermal efficiency %
REFERENCES
Adnan Ibrahim & Goh Li Jin. 2009. Hybrid Photovoltaic Thermal (PV/T) Air and Water Based Solar
Collectors Suitable for Building Integrated Applications. American Journal of Environmental Sci-
ences. 5: 614–624.
Al Harbi, Y., Eugenio, N.N. & Al Zahrani, S. 1998. Photovoltaic-thermal solar energy experiment in
Saudi Arabia. Renewable Energy. 15: 483–486.
Bahaidarah, H., Abdul Subhan., Gandhidasan., & Rehman, S. 2013. Performance evaluation of a PV (pho-
tovoltaic) module by back surface water cooling for hot climatic conditions. Energy. 59: 445–453.
Chegaar, M., Hamzaoui, A., Namoda, A., Petit, P., Aillerie, M., & Herguth, A. 2013. Effect of illumi-
nation intensity on solar cells parameters. Advancements in Renewable Energy and Clean Environ-
ment, Energy Procedia. 36: 722–729.
Chow, T.T. 2010. A review on photovoltaic/thermal hybrid solar technology. Applied Energy. 87: 365–379.
Garg, H. & Agarwal, R. 1995. Some aspects of a PV/T collector/forced circulation at plate solar water
heater with solar cells. EnergyConverse Management. 36: 87–99.
Garg, H.P. & Adhikari, R.S. 1997. Conventional hybrid photovoltaic/thermal (PV/T) air heating collec-
tors: steady state simulation, Renewable Energy. 11: 363–85.
Huang, B., Lin, T., Hung, W. & Sun, F. 2001. Performance evaluation of solar photovoltaic/thermal
systems. Solar Energy. 70: 443–448.
Kumar, K., Sharma, S.D. & Jain, L. 2007. Standalone Photovoltaic (PV) Module Outdoor Testing
Facility for UAE Climate. CSEM-UAE Innovation Center LLC 2007.
Solanki, S.C., Swapnil Dubey & Arvind Tiwari. 2009. Indoor simulation and testing of photovoltaic
thermal (PV/T) air collectors. Centre for Energy Studies, Applied Energy, IIT Delhi. 86: 2421–2428.
Xingxing Zhanga., Xudong Zhaoa., Stefan Smitha., Jihuan Xub. & XiaotongYuc. 2012. Review of
R&D progress and practical application of the solar photovoltaic/thermal (PV/T) technologies.
Renewable and Sustainable Energy Reviews. 16: 599–617.
ABSTRACT: Biodiesel is a renewable energy source and an alternative fuel for compression ignition engines. Biodiesel satisfies the physical and chemical standards of diesel and can therefore be used in place of diesel in compression ignition engines; compared with normal diesel, biodiesels cause less pollution. The main objective of this experimental study is to compare the properties of pure diesel, fish biodiesel, coconut testa biodiesel, a coconut testa Biodiesel-Ethanol-Diesel blend (BED) made up of 5 vol% ethanol, 10 vol% biodiesel and 85 vol% diesel, and a coconut testa Biodiesel-Ethanol blend (BE) made up of 5% ethanol and 95% biodiesel. Properties such as flash and fire points, density, viscosity, Acid Value (AV), Saponification Value (SV), Iodine Value (IV), and calorific value of the different biodiesels were analyzed.
1 INTRODUCTION
Recent developments in the application of alternative fuels for compression ignition engines have attracted attention in the automobile domain, owing to the depletion of fossil fuels and the increasing air pollution caused by engine emissions. Plastic pyrolysis oil is another alternative fuel for some engine applications under certain operating conditions. Biodiesel
consists of a mixture of ethyl or methyl esters of fatty acids derived from vegetable oils or animal
fats, which are obtained from the transesterification reaction with short-chain alcohol, methanol
or ethanol, respectively and in the presence of a catalyst (Parente, 2003). The properties, namely
density, viscosity, flash point and fire point of fish oil biodiesel are higher and the calorific value
is 0.92 times that of diesel (Shivraj et al., 2014). Flash and fire points increase with an increase in
the amount of biodiesel in the blend. The cetane number of Fish Oil Biodiesel (FOB) is higher;
this ensures the complete combustion of FOB. The calorific value of B100 is less and increases
with the increase in the amount of diesel fuel in the blend, and the flash and fire points also
increase with increase in the amount of biodiesel in the blend (Pavan & Venkanna, 2014).
Fuel stability related properties, acid value and iodine value of testa biodiesel is within the
range, which shows it has good storage stability (Swaroop et al., 2016). The best proposed
solution for reducing diesel engine pollutants is using biofuels that consist of a combina-
tion of diesel, biodiesel and ethanol (Hoseini, 2017). A mixture of biodiesel-diesel-ethanol
blend is utilized to improve the poor cold-flow properties of biodiesel, as the cetane number and lubricity of ethanol-diesel blends are too low (Hatkard et al., 2015). The blended fuels reduced PM emissions while increasing NOx emissions, but reduced smoke and CO emissions (Çelikten, 2011). The use of ethanol as a fuel or fuel additive in diesel engines is limited by its miscibility problems with diesel fuel; other problems are its low cetane number, low lubricity and reduced heating value (Altun et al., 2011). In comparison with diesel fuel, biodie-
sel blends produced lower sound levels due to many factors, including an increase in oxygen
content, reduction in the ignition delay, higher viscosity and lubricity (Liaquat et al., 2013).
The use of different vegetable oils affects production processes and costs, and the resulting
3 EXPERIMENTAL PROCEDURES
3.1 Density
Density was measured using the standard method (BIS, 1972). A capillary stopper relative
density bottle of 50 ml capacity was used to determine the density of the biodiesel. Density
was calculated using the following equation.
\text{Density} = \frac{W_3 - W_1}{W_2 - W_1} \times \rho_{H_2O}  (1)

\text{Iodine value} = \frac{(B - S) \times N \times 12.69}{W}  (2)

\text{Acid value} = \frac{N \times V \times 56.1}{W}  (3)

where N = normality of NaOH = 0.1, V = volume of NaOH required in mL, and W = weight of the sample in g.

\text{Saponification value} = \frac{N \times (b - s) \times 56.1}{W}  (4)
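A short sketch of Equations 1 to 4 follows, with assumed laboratory readings (bottle masses in g, titration volumes in mL, and an assumed 0.5 N titrant for the saponification test); the numbers are illustrative only, not this study's data.

```python
# Worked sketch of Eqs. 1-4 with assumed bench readings (illustrative only).
def density(w1, w2, w3, rho_water=1000.0):
    return (w3 - w1) / (w2 - w1) * rho_water        # Eq. 1, kg/m3

def iodine_value(B, S, N, W):
    return (B - S) * N * 12.69 / W                  # Eq. 2

def acid_value(V, W, N=0.1):
    return N * V * 56.1 / W                         # Eq. 3

def saponification_value(b, s, W, N=0.5):
    return N * (b - s) * 56.1 / W                   # Eq. 4

print(density(w1=25.0, w2=75.0, w3=66.6))           # -> 832.0 kg/m3
print(iodine_value(B=46.0, S=20.0, N=0.1, W=0.5))   # g I2 per 100 g sample
print(acid_value(V=1.2, W=2.0))                     # mg KOH per g sample
print(saponification_value(b=24.0, s=17.0, W=1.0))  # mg KOH per g sample
```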
4.1 Density
The densities of the different fuels are shown in Figure 1 and differ for each fuel sample. The density of the BED blend is 828.37 kg/m3 and that of testa biodiesel is 832.3 kg/m3, both of which are comparable with the density of diesel (833 kg/m3). The density of the BE blend is 875 kg/m3, and for fish biodiesel the density was found to be 787.5 kg/m3. All these values satisfy the ASTM standards (575–900 kg/m3).
Consistent biodiesel quality can only be assured by analysis against biodiesel quality standards such as ASTM, EN and BIS. To attain this aim, it is very important to check quality throughout biodiesel production, whether by transesterification, emulsification or any other technique, and from the feedstock to the distribution units. The physical and chemical properties of different biodiesels are mainly influenced by the composition of the feedstock used in their production, and by the nature of the feedstock and its storage conditions, such as exposure to air, sunlight and humidity. Furthermore, different regional markets impose different quality requirements. The main differences are found in viscosity, iodine value, density and acid value. Other reasons for these variations are the performance-describing properties at very low temperature, such as density at 15°C as per ASTM standards, and the exposure conditions. It is not possible to devise a single formula for biodiesel standards because of these differences, which is a major disruption both for biodiesel imports and exports among the countries of the world and for the automotive industry.
REFERENCES
Altun, S., C. Öner, F. Yaşar, & Firat, M. (2011). Effect of a mixture of biodiesel-diesel ethanol as fuel on
diesel engine emissions. International Advanced Technologies Symposium (IATS’11), Elazığ, Turkey,
16–18.
Çelikten, I. (2011). The effect of biodiesel, ethanol and diesel fuel blends on the performance and
exhaust emissions in a DI diesel engine. Gazi University Journal of Science, 24(2), 341–346.
Hatkard, N., Salunkeg, B. & Lawande, V.R. (2015). The impact of biodiesel-diesel-ethanol blends.
volume 2, Issue 5.
Hoseini, S. (2017). The effect of combustion management on diesel engine emissions fueled with biodie-
sel-diesel blends; Renewable and Sustainable Energy Reviews, 73, 307–331.
Liaquat, A.M., Masjuki, H.H., Kalam, M.A., Rizwanul Fattah, I.M., Hazrat, M.A., Varman, M.,
Mofijur, M. & Shahabuddin, M. (2013). Effect of coconut biodiesel blended fuels on engine perform-
ance and emission characteristics. Procedia Engineering, 56.
Parente, E.J. (2003). Biodiesel: uma aventura tecnológica num país engraçado (1st ed). Fortaleza:
Unigráfica.
Pavan, P. & Venkanna, B.K. (2014). Production and characterization of biodiesel from mackerel fish oil.
International Journal of Scientific & Engineering Research, 5(11).
Shivraj, H., Astagi, V. & Omprakash, D.H. (2014). Experimental investigation on performance, emission
and combustion characteristics of single cylinder diesel engine running on fish oil biodiesel. Inter-
national Journal for Scientific Research & Development (IJSRD), 2(7), ISSN (online): 2321-0613.
Swaroop, C., Tennison, K. Jose, & Ramesh, A. (2016). Property testing of biodiesel derived from coco-
nut testa oil and its property comparison with standard values, ISSN, 2394–6210, 2(2).
Titipong, I. & Ajayk, D. (2014). Biodiesel from vegetable oils. Renewable and Sustainable Energy
Reviews, 31, 446–471.
S.S. Bindu, Sulav Kafle, Godwin J. Philip & K.E. Reby Roy
TKM College of Engineering, Kollam, Kerala, India
ABSTRACT: One of the most important applications of cryogenics is the transfer of cryogenic fluid from the storage site to the point of utilization. To optimize the initial phase of cryogenic heat transfer, twisted channels are coated with different coating materials, which increases the chilldown efficiency of cryogenic systems. The application of coating materials such as graphene, CNT, polyurethane and Teflon to twisted channels yields significant time savings in cooldown compared with conventional channel surfaces. A computational study was performed to evaluate the enhancement of heat transfer and the effectiveness of coatings on chilldown time. The chilldown of uncoated and coated surfaces is compared, and the latter is found to be more efficient.
Keywords: Chill down, Cryogenics, Liquid Nitrogen, Nucleate boiling, Polyurethane coat-
ings, Twisted channels
NOMENCLATURE
Ta External temperature
Ti Coil initial temperature
T Inlet temperature
qc, qu heat load on coated and uncoated panels.
Uc, Uu overall heat transfer coefficient for coated and uncoated panels.
(∆θ)c, (∆θ)u temperature difference between skin temperature and LN2 for
coated and uncoated panels.
Ac, Au panel area for the coated and uncoated panels.
1 INTRODUCTION
Cryogenic liquids are used in many technological applications, such as propulsion systems and the cooling of superconducting magnets, and are also widely adopted for various clinical applications. Cryogen transfer involving two-phase flow is an indispensable procedure before the operation of these systems, and the transfer process is characterized by its highly transient nature. Cryogenic chilldown refers to the process by which the temperature of the transfer line is lowered to the saturation temperature of the cryogen. This process is highly unstable and characterized by large pressure fluctuations accompanied by transient boiling heat transfer.
A team of researchers at MIT (Preston, Mafra, Miljkovic, Kong, & Wang, 2015) studied the usefulness of ultrathin graphene coatings deposited on conducting materials by the CVD process and reported that they promote dropwise condensation. A CFD analysis of single-phase flows through coiled tubes was performed (Jayakumar et al., 2010), and it was found that the fluid particles undergo oscillatory motion inside the pipe, causing fluctuations in heat transfer rates.
A numerical investigation of heat transfer from hot water in a shell to cold water flowing in a helical coil (Neshat, Hossainpour, & Bahiraee, 2014) identified that the mass flow rate and specific heat of the fluids depend on the shell-side fluid temperature and the geometric parameters of the coil.
q_c = U_c A_c (\Delta\theta)_c  (1)

q_u = U_u A_u (\Delta\theta)_u  (2)
Heat transfer in the twisted channel and the effect of the coating materials are analyzed by varying the coating material, the coating thickness and the fluid flow rate. The coating materials used are polyurethane, Teflon, graphene and CNT. Flow velocities are varied from 0.1 m/s to 0.01 m/s. The coating thickness is varied from 0.025 mm to 0.1 mm for polyurethane and Teflon, and is 0.1 mm for graphene and CNT. The properties of the coating materials are shown in Table 1.
3 ANALYSIS
Figure 3. Variation of surface temperature with time when inlet velocity is 0.01 m/s, coating thickness
0.1 mm.
3.2 Case 1: (b) Velocity = 0.01 m/s, coating of Teflon with thickness 0.025 mm
In this case, from Figure 4, it is seen that the tube surface temperature reaches 272.88 K at 40 seconds for the uncoated coil and at 39.3 seconds for the coated one. The coated coil therefore takes 0.7 seconds, i.e. 1.75%, less time to reach that temperature.
Figure 5 depicts the variation of velocity magnitude at various section slices along the coil when liquid nitrogen is passed through at a flow rate of 0.01 m/s. It shows that the velocity decreases from inlet to outlet. Also, at all sections, the velocity is higher in the outward direction, which may be due to the centrifugal action of the fluid.
Figure 6 shows the temperature gradient at different times, t = 0, 5 and 10 s. The temperature gradient is greatest along the interface surface of the solid and fluid, as this is where the heat transfer interaction occurs.
3.3 Case 1: (c) Velocity = 0.01 m/s, coatings of graphene and CNT with thickness 0.1 mm
The geometric model for this case is shown in Figure 7. From Figure 8, it is seen that the tube surface temperature reaches 277 K at 28 seconds for the uncoated coil, at 23.1 seconds for the graphene-coated coil and at 20.9 seconds for the CNT-coated coil. The graphene-coated coil therefore takes 18.2% less time, and the CNT-coated coil 25.35% less time, to reach that temperature.
Figure 9 depicts the temperature gradient variation at the inlet section slice of the coil at different times, t = 1, 10, 20, 30, 40 and 50 s. The temperature gradient is greatest along the interface surface of the solid and fluid, as this is where the heat transfer interaction occurs. Figure 10 shows the surface temperature variation along the coil: the temperature increases away from the inlet (lower right end in the figure), where very cold liquid nitrogen enters, as heat is transferred from the hot outside region to the inner cold fluid. Figures 11 and 12 depict the variation of average surface temperature with time when LN2
Figure 12. Variation of surface temperature with time when inlet velocity is 0.1 m/s, coating thickness
0.1 mm.
is flowing through the coil at a flow rate of 0.1 m/s. The average surface temperature decreases as the cold LN2 passes through the tube.
inner cold fluid. Figure 16 depicts the variation of average surface temperature with time when liquid nitrogen flows through the coil at a flow rate of 0.1 m/s; the average surface temperature decreases as the cold LN2 passes through the tube. For a coating thickness of 0.025 mm, the tube surface temperature reaches 268.1 K at 59.9 seconds for the uncoated coil and at 56.4 seconds for the coated one, so the coated coil takes 3.5 seconds, i.e. 5.85%, less time to reach that temperature. For a coating thickness of 0.1 mm, the same temperature is reached at 59.9 seconds for the uncoated coil and at 57.6 seconds for the coated one, so the coated coil takes 2.3 seconds, i.e. 4%, less time.
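The percentage figures quoted throughout this section follow from a simple relative time saving, sketched below using the times of the 0.025 mm case.

```python
# Relative time saving of the coated coil, as used throughout this section.
def chilldown_saving(t_uncoated_s, t_coated_s):
    return (t_uncoated_s - t_coated_s) / t_uncoated_s * 100.0

print(round(chilldown_saving(59.9, 56.4), 2))  # -> 5.84 %, i.e. the ~5.85% quoted
```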
3.5 Case 2: (b) Velocity = 0.1 m/s, coating of Teflon with thickness 0.025 mm and 0.1 mm
Figure 17 depicts the variation of velocity magnitude at various section slices along the coil when liquid nitrogen is passed at a flow rate of 0.1 m/s. It shows that the velocity decreases from the inlet (lower part of the figure) to the outlet. Also, at all sections, the velocity is higher in the outward direction, possibly due to the centrifugal action of the fluid.
The numerical study shows that, with polyurethane, Teflon, CNT or graphene coatings, the cooldown time is significantly shorter for coated helical channels than for uncoated ones. These results should prove useful in the design of future transfer lines. Future work can employ different coating materials to cool down transfer lines made of stainless steel and other suitable materials with different coil shapes.
The authors would like to acknowledge the Space Technology Laboratory of TKM College of Engineering and the Kerala State Council for Science, Technology and Environment (KSCSTE) for providing facilities for the successful completion of the project.
REFERENCES
Allen, L.D. 1965. Advances in Cryogenic Engineering. Texas: Cryogenic Engineering Conference Rice
University Houston.
Goli, P., Ning, H., Li, X., Lu, C.Y., Novoselov, K.S., & Balandin, A.A. 2013. Strong Enhancement of
Thermal Properties of Copper Films after Chemical Vapor Deposition of Graphene.
Jayakumar, J.S., Mahajani, S.M., Mandal, J.C., Iyer, K.N. & Vijayan, P.K. 2010. CFD analysis of single-phase flows inside helically coiled tubes. Computers & Chemical Engineering, 34: 430–446.
Maddox, J.P. 1966. Advances in Cryogenic Engineering. (pp. 536–546). New York: Plenum Press.
Neshat, E., Hossainpour, S. & Bahiraee, F. 2014. Experimental and numerical study on unsteady natural convection heat transfer in helically coiled tube heat exchangers. Heat and Mass Transfer, 50(6): 877–885.
Preston, D.J., Mafra, D.L., Miljkovic, N., Kong, J., & Wang, E.N. 2015. Scalable Graphene Coatings for
Enhanced Condensation Heat Transfer. American Chemical Society, 15: 2902–2909.
Reed, R.P., Fickett, F., & T, L. 1967. Advances in Cryogenic Engineering. 12, pp. 331–339. Plenum Press.
Shaeffer, R., Hu, H. & Chung, J.N. 2013. An experimental study on liquid nitrogen pipe chilldown and heat transfer with pulse flows. International Journal of Heat and Mass Transfer, 67: 955–966.
ABSTRACT: To optimize the cryogenic chilldown of transfer lines and to improve the effi-
ciency of cryogenic systems, chilldown time should be reduced. Such time saving is associated
with reduced consumption of cryogenic fluids. Experiments were performed on helical channels
with a polyurethane coating for different inlet pressures, and the performance of an untreated
helical surface and one with a coating were compared at corresponding pressures. The results
indicated that there are substantial savings in chilldown times with coated surfaces compared to
non-coated ones. Liquid nitrogen was used as the cryogen and was passed through helical coils
made of copper. The significant reduction in chilldown time is observed only after the onset of
a nucleate boiling regime, as indicated by a graph of average surface temperature versus time.
1 INTRODUCTION
Cryogenic fluids are typically used in industry, space exploration, the cooling of electronic components, and the medical field. The transfer of cryogens to their associated installations is very important prior to operation. Cryogen transfer is accompanied by phase changes in
flow, pressure surges and flow reversal. When cryogens are introduced into a transfer line that is
in thermal equilibrium with the ambient temperature, uncontrolled evaporation occurs. In order
to establish a steady flow of fluid during this initial phase, cooling down of equipment is a pre-
requisite, which is termed cryogenic chilldown. To design a cryogenic transfer line, phenomena
such as heat transfer, fluid flow, and changes in pressure across the test line need to be identified
and managed. Observing chilldown processes, Yuan et al. (2008) indicated that the cryogenic
liquid encounters three boiling regimes during chilldown: film, transition, and nucleate boiling.
This differs from boiling experiments in that no external heat is provided in the test section.
Berger et al. (1983) found that helically coiled tubes are superior to straight tubes for heat
transfer applications. Because of the curvature of helical coils, centrifugal force is introduced
resulting in the development of secondary flows, as reported by Dravid et al. (1971). Thus,
the movement of the outermost fluid tends to be faster than that at the inside of the coil,
which increases the turbulence and thereby increases heat transfer. As reported by Cowley
et al. (1962), a reduction in the time taken for cooldown of cryogenic equipment was seen
when metallic components were coated with materials of poor thermal conductivity. Allen
(1966) suggested that heat transfer by virtue of forced convection between the entering liq-
uid and the transfer line wall results in a sudden temperature drop during the initial phases
of the chilldown process. After a brief time period, the rate of temperature drop reduces to
a minimum and is then maintained until chilldown is attained. During this period, the flow
encountered is film boiling with relatively low-velocity gas flow. When low-conducting coating
materials such as Teflon are introduced between the transfer line wall and the cryogen, a ther-
mal gradient is developed resulting in the early attainment of a temperature corresponding
to a nucleate boiling regime. This eventually results in higher rates of heat transfer, leading to
faster chilldown of the line. Maddox and Frederking (1966) reported that heat removal rates
2 EXPERIMENTAL APPARATUS
2.2 Procedure
A straight copper tube was first coated with polyurethane to a thickness of 0.1 mm. From
this, a helical test section with pitch angle of 8° and the above-mentioned dimensions was
prepared. The thermocouples were fixed on the surface of the test section circumferentially
(120° apart) at five equally spaced locations with three thermocouples in each section (in one
pitch length). The test section was covered first by yarn and then by nitrile rubber insulation.
Polyurethane foam was sprayed onto it for further insulation. The experiment was conducted
at two different inlet flow pressures of 6.89 kN/m2 and 8.61 kN/m2. The equivalent mass flow
rates were calculated to be 93 kg/m2 s and 116 kg/m2 s, respectively.
Figure 6. Variation of temperature along tube from inlet to outlet at 200 seconds.
Figure 7. Variation of temperature along tube from inlet to outlet at 300 seconds.
the coil and the fluid. This in turn reduces heat transfer to the fluid, which causes a lower
reduction in temperature compared to the inlet temperature drop.
According to Cowley et al. (1962), chilldown time can be decreased by using a thin layer
of insulating material on the surface of the metal, and they also postulated that it can be
applied to the internal surface of cryopanels. The narrow region in which the maximum heat
transfer exists, after which the nucleate boiling begins, can be widened with the addition of a
low thermal conductivity coating material on the conducting surface. When enough fluid is
present, a thermal gradient is developed because of this layer that helps to attain the tempera-
ture in the critical maximum heat transfer region, resulting in the shortening of chill down
time. From our observations, with an increase in mass flow rate, the chilldown time decreases
as the quantity of fluid flowing through the tube increases. Because the film boiling regime
has a lower heat transfer rate than the nucleate boiling regime, we have focused our compari-
son on the nucleate boiling regimes of the coated and uncoated surfaces. On the basis of the
higher slope of the nucleate boiling regime of the coated tubes and the temperature profiles,
it can be inferred that heat transfer would be enhanced by coating the inner walls of the tube,
resulting in further reduction of chilldown time.
From our experiment on the chilldown of polyurethane-coated and uncoated helical coils, it can be concluded that the polyurethane coating of the helical coil increased the time required for transition from film boiling to nucleate boiling, as a result of the high specific heat of polyurethane.
Because of this high specific heat, the temperature drop of the polyurethane-coated tube
takes longer than the uncoated one. This maintains the flow in a film boiling regime for
longer, but after transition the temperature drop was rapid and for a longer duration. Here,
the temperature drop of the polyurethane is considerably lower, which keeps the temperature
difference higher, causing higher heat transfer compared to the uncoated tube.
Because of the sudden increase in heat transfer after the transition, it was found that the
polyurethane-coated coil has a shorter chilldown time for different mass flow rates. These
results should prove useful in the design of transfer lines. Future work can employ different
combinations of materials for transfer lines and coating materials with lower conductivity.
The effectiveness of different geometries can also be investigated. An in-depth understanding
of this phenomenon can be obtained by considering the heat transfer coefficient variation in
the three flow regimes during the chilldown process.
ACKNOWLEDGMENTS
The authors would like to thank the Space Technology Laboratory of TKM College of
Engineering, and also the Technical Education Quality Improvement Programme Phase II
(TEQIP-II), promoted by the National Project Implementation Unit, Ministry of Human
Resource Development, Government of India, for their support.
REFERENCES
Allen, L. (1966). A method of increasing heat transfer to space chamber cryo panels. Advances in
Cryogenic Engineering, 11, 547–553.
Berger, S., Talbot, L. & Yao, L. (1983). Flow in curved pipes. Annual Review of Fluid Mechanics, 15,
461–512.
Chen, C.-N., Han, J.-T., Jen, T.-C. & Shao, L. (2010). Thermo-chemical characteristics of R134a flow boiling in helically coiled tubes at low mass flux and low pressure. Thermochimica Acta, 512, 1–7.
Cowley, C., Timson, W. & Sawdye, J. (1962). A method for improving heat transfer to cryogenic fluid.
Advances in Cryogenic Engineering, 7, 385–390.
Dravid, A., Smith, K. & Merrill, E. (1971). Effect of secondary fluid motion on laminar flow heat trans-
fer in helically coiled tubes. AIChE Journal, 17, 1114–1122.
Fsadni, A.M. & Whitty, J.P. (2016). A review on the two-phase heat transfer characteristics in helically
coiled tube heat exchangers. International Journal of Heat and Mass Transfer, 95, 551–565.
Hardik, B. & Prabhu, S. (2017). Critical heat flux in helical coils at low pressure. Applied Thermal
Engineering, 112, 1223–1239.
Jensen, M.K. & Bergles, A.E. (1981). Critical heat flux in helically coiled tubes. Journal of Heat Transfer,
103, 660–666.
Johnson, J. & Shine, S. (2015). Transient cryogenic chill down process in horizontal and inclined pipes.
Cryogenics, 7, 7–17.
Leonard, K., Getty, R. & Franks, D. (1967). A comparison of cooldown time between internally coated
and uncoated propellant lines. Advances in Cryogenic Engineering, 12, 331–339.
Maddox, J. & Frederking, T. (1966). Cooldown of insulated metal tube to cryogenic temperature.
Advances in Cryogenic Engineering, 11, 536–546.
Prabhanjan, D.G., Raghavan, G.S.V. & Rennie, T.J. (2002). Comparison of heat transfer rates between
a straight tube heat exchanger and helically coiled heat exchangers. International Communications in
Heat and Mass Transfer, 29, 185–191.
Yuan, K., Chung, Y.J.N. & Shyy, W. (2008). Cryogenic boiling and two-phase flow during pipe chill-
down in earth and reduced gravity. Journal of Low Temperature Physics, 150, 101–122.
1 INTRODUCTION
Solar energy is considered the most essential, clean and inexhaustibly accessible renewable energy source. Its two main applications are heating and the generation of electrical energy, and its utilisation has great scope in the present situation. This work relates to the utilization of solar energy to solve a transportation problem faced in oil pipelines. Heating is one of the usual methods used to reduce the viscosity of crude oil, and thereby the pressure drop, compared with chemical treatment methods. Midhun et al. (2015) presented a novel method of heating crude oil pipelines using a Parabolic Trough Collector (PTC) to reduce pumping power by applying concentrated solar radiation to the pipe surface. The authors investigated the pressure drop in a heated oil pipeline and in an adiabatic pipe, and compared the two to show the pressure drop reduction. The hydrodynamic and thermal characteristics of the flow were also investigated to explain the nature of the flow and heat transfer inside the pipe. Also, the relevance of a three
2 METHOD
In this work, the main motive is to find the effect of the Reynolds number of the flow and of different levels of concentrated solar radiation on the pressure drop reduction in oil pipelines. It is therefore necessary to conduct a single-phase, three-dimensional analysis of the flow through an oil pipeline to determine the pressure drop along with the hydrodynamic and thermal characteristics of the flow. The analysis thus focuses on the impact of variations in the Reynolds number and heat flux on the reduction of pressure drop in oil pipelines heated by a PTC. The numerical analysis is performed using the CFD software tool ANSYS FLUENT 14.5.
The steady-state equation for conservation of mass can be written as follows:

\frac{\partial}{\partial x_i}(\rho u_i) = 0  (1)

Momentum equation:

\frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial P}{\partial x_i} + \frac{\partial}{\partial x_j}\left[(\mu + \mu_t)\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3}(\mu + \mu_t)\frac{\partial u_l}{\partial x_l}\delta_{ij}\right] + \rho g_i  (2)

Energy equation:

\frac{\partial}{\partial x_i}(\rho u_i T) = \frac{\partial}{\partial x_i}\left[\left(\frac{\mu}{Pr} + \frac{\mu_t}{\sigma_T}\right)\frac{\partial T}{\partial x_i}\right] + S_R  (3)

The most frequently adopted turbulence model is the k–ε model. The transport equations for the realizable k–ε model are:

\frac{\partial}{\partial x_j}(\rho k u_j) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k + G_b - \rho\varepsilon - Y_M + S_k  (4)

\frac{\partial}{\partial x_j}(\rho \varepsilon u_j) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + \rho C_1 S \varepsilon - \rho C_2 \frac{\varepsilon^2}{k + \sqrt{\nu\varepsilon}} + C_{1\varepsilon}\frac{\varepsilon}{k} C_{3\varepsilon} G_b + S_\varepsilon  (5)
Figure 2. Heating of oil pipeline by applying concentrated solar radiation on base surface (with heat loss).
from (Tavakoli and Baktash 2012), where the temperature T is in °C. The correlation used for the dynamic viscosity (µ) of light dead crude oil, from Sattarin et al. (2007), is
\mu = a \times \frac{e^{b \cdot API}}{API}  (7)
q_g''' \times \frac{\pi}{4}\left[(D_0 + 10^{-6})^2 - D_0^2\right] \times L = q'' \times \pi D_0 \times L  (8)
By using a mixed type of wall boundary condition, the heating effect (by heat generation) along with the convection and radiation losses from the pipe surface can be considered. The combined heat transfer coefficient for forced and natural convection losses from the pipe to the surroundings can be calculated from a correlation in terms of wind speed (Duffie and Beckman 2013):
from which Tsky = 286.82 K, and the atmospheric temperature is taken as T∞ = 300 K. The emissivity of the sky, εsky, is calculated using the Trinity equation:
The main objective of the present work is to find the effect of the Reynolds number of the flow and of different levels of concentrated solar radiation on the pressure drop reduction in oil pipelines. The heat loss from the pipe to the surroundings is also considered in this analysis, so it approximates a practical case. Because of this heat loss, the effect of heating on reducing the pressure drop is naturally smaller than in the ideal case (without heat loss), except at higher levels of heating. The hydrodynamic and thermal characteristics of the flow are analysed and compared with the ideal case in order to find the effect of heat loss. The pipeline is heated non-uniformly across the section, so average values of temperature, friction factor and Nusselt number at each section of the oil pipeline are considered here.
3.1 Effect of Reynolds number at constant heat flux (with heat loss)
The pressure distributions in the adiabatic pipe and the heated pipe at different Reynolds numbers can be compared in Figure 3. The pressure drop in the adiabatic pipe is higher than in the heated pipe at each Reynolds number; this pressure drop reduction in the heated pipe is due to the decrease in viscosity.
Figure 3 compares the pressure variation with axial length at constant heat flux and under adiabatic conditions for different Reynolds numbers. The effect of heat loss is not strongly reflected in the reduction of pressure drop: the shape of the pressure curve is similar to that of the ideal case (without heat loss), and the difference between the pressure drops in the heated oil pipe with thermal leakage and in the ideal case is very small. It is observed that
Figure 3. Pressure distribution along the heated pipe (at constant concentrated solar radiation of 80000 W/m2) and adiabatic pipe under different Reynolds numbers.
Figure 4. Comparison of average friction factor of the heated pipe along dimensionless length at concentrated solar radiation = 80000 W/m2 under different Reynolds numbers.
3.2 Effect of different heat flux (concentrated solar radiation) at constant Reynolds number
In this analysis, the Reynolds number at the inlet is maintained at 3000 while the concentrated radiation (heat flux) applied around the bottom of the oil pipe surface is varied. Here also, the heat loss from the pipe is considered.
A sudden decrease in friction factor due to the heating-induced decrease in viscosity can be observed in Figure 6, after which f exhibits a small rise along the axial length. This is because the density of the fluid near the wall decreases further while the viscosity variation is very small. The velocity of the fluid near the wall increases because of the decrease in density, and this increase in fluid velocity raises the wall shear stress because of the high velocity gradient at the wall surface. Thus, the friction factor increases gradually up to the outlet of the pipe. This leads to an adverse effect on the pressure drop reduction as the heat flux increases, which is analysed further in detail by plotting pressure drop versus heat flux.
Figure 7 compares the pressure drop between oil pipes in the ideal case and with thermal leakage at constant Re under different heat fluxes. The pressure drops in the heated pipe without heat loss and with heat loss are plotted against heat flux. The increase in pressure drop is due to the increase in friction factor caused by the increased velocity gradient; the friction factor increases with heat flux beyond a particular value of heat flux, which may depend on the length of heating, the heat loss and the fluid properties. It is clear that the heat loss has some effect on the pressure drop reduction: the pressure drop curve is shifted towards the right. The pressure drop in the ideal case (without heat loss) is lower than that in the pipe with heat loss up to a certain limit of heat flux, after which the pressure drop in the pipeline with heat loss is found to be higher than in the ideal case.
3.3 Validation
The average Nusselt number obtained from the CFD analysis is used for validation. It is compared with the value obtained from Gnielinski's correlation, for Re = 3000 at the inlet and a concentrated solar radiation of 40000 W/m2 with heat loss.
The fluid properties were calculated at the bulk mean temperature. The average Nusselt number for the oil pipeline with heat loss obtained from the analysis agrees with Gnielinski's correlation (Eq. 11) to within an error of 6.48%, which may be due to the severe property variation and the large difference between the wall temperature and the fluid temperature.
Nu = \frac{(f/8)(Re - 1000)\,Pr}{1 + 12.7\,(f/8)^{0.5}\,(Pr^{2/3} - 1)}  (11)
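As an illustrative check, the sketch below evaluates Equation 11 at the stated inlet Reynolds number; the friction factor is taken from the Petukhov relation and the Prandtl number is an assumed value for crude oil, so the printed Nusselt number is illustrative rather than the paper's result.

```python
# Gnielinski correlation (Eq. 11). Pr is an assumed crude-oil value and f is
# from the Petukhov relation, so the output is illustrative only.
import math

def gnielinski(Re, Pr, f):
    return (f / 8.0) * (Re - 1000.0) * Pr / (
        1.0 + 12.7 * math.sqrt(f / 8.0) * (Pr ** (2.0 / 3.0) - 1.0))

Re = 3000.0
f = (0.790 * math.log(Re) - 1.64) ** -2   # Petukhov friction factor (assumed)
print(round(gnielinski(Re, Pr=100.0, f=f), 1))
```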
4 CONCLUSIONS
The utilisation of a PTC for pressure drop reduction in oil pipelines, by applying concentrated solar radiation to the pipe surface in order to reduce pumping power, was analysed. The pressure variation in the heated oil pipeline was compared with that in an adiabatic oil pipeline at corresponding Reynolds numbers to determine the effect of heating on pressure drop reduction. The analysis shows that the effect of heating on pressure drop reduction increases significantly with Reynolds number; heating is more effective at higher Reynolds numbers. The pressure drop was also plotted against different levels of concentrated solar radiation, and the effect of pressure drop reduction varies with concentrated solar radiation in a peculiar manner: at first the pressure drop reduces with increasing heat flux, but beyond a point the pressure drop curve shows a reverse trend, due to the increase in friction factor.
NOMENCLATURE
D Diameter (m)
f Friction Factor
h Convective heat transfer coefficient (W/m2 K)
k Thermal conductivity of fluid (W/mK)
Nu Nusselt number
P Pressure (Pa)
Pr Prandtl number
qg Volumetric heat generation rate (W/m3)
q” Rate of heat transfer per unit area (W/m2)
Re Reynolds number
T Temperature (K)
u Velocity (m/s)
X Axial length (m)
Greek symbols
ε Emissivity
µ Dynamic viscosity (Ns/m2)
ν Kinematic viscosity (m2/s)
ρ density (kg/m3)
REFERENCES
Cengel, Y.A. 2013. Heat Transfer: A Practical Approach. New York: McGraw-Hill.
Duffie, J.A., & Beckman, W.A. 2013. Solar Engineering of Thermal Processes. Hoboken, NJ, USA: John
Wiley & Sons, Inc.
Forristall, R. 2003. Heat Transfer Analysis and Modeling of a Parabolic Trough Solar Receiver Imple-
mented in Engineering Equation Solver. Golden, Colo.: National Renewable Energy Laboratory,
NREL/TP; 550-34169.
Hart, A. 2014. A Review of Technologies for Transporting Heavy Crude Oil and Bitumen via Pipelines.
Journal of Petroleum Exploration and Production Technology 4(3): 327–36.
Mammadov, F.F. 2006. Application of Solar Energy in the Initial Crude Oil Treatment Process in Oil
Fields. Journal of Energy in Southern Africa 17(2): 27–30.
Martínez-Palou, R., Mosqueira, M. de L., Zapata-Rendón, B., Mar-Juárez, E., Bernal-Huicochea, C.,
de la Cruz Clavel-López, J., & Aburto, J. 2011. Transportation of Heavy and Extra-Heavy Crude Oil
by Pipeline: A Review. Journal of Petroleum Science and Engineering 75(3–4): 274–82.
Midhun V.C., Shaji, K. & Jithesh, P.K. 2015. Application of Parabolic Trough Collector for Reduction
of Pressure Drop in Oil Pipelines. International Journal of Modern Engineering Research 5(3): 40–48.
Price, H., Lüpfert, E., Kearney, D., Zarza, E., Cohen, G., Gee, R., & Mahoney, R. 2002. Advances in
Parabolic Trough Solar Power Technology. Journal of Solar Energy Engineering, 124(2), 109.
Roesle, M., Coskun, V. & Steinfeld, A. 2011. Numerical Analysis of Heat Loss From a Parabolic Trough Absorber Tube With Active Vacuum System. Journal of Solar Energy Engineering 133(3): 31015.
Saniere, A., Hénaut, I. & Argillier, J.F. 2004. Pipeline Transportation of Heavy Oils, A Strategic, Economic and Technological Challenge. Oil & Gas Science and Technology - Rev. IFP 59(5): 455–466.
Sattarin, M., Modarresi, H. & Teymori, M. 2007. New Viscosity Correlations for Dead Crude Oils.
Petroleum & Coal 49(2): 33–39.
Sharma, V.B. & Mullick, S.C. 1991. Estimation of Heat-Transfer Coefficients, the Upward Heat Flow, and Evaporation in a Solar Still. Journal of Solar Energy Engineering 113(1): 36–41.
Tavakoli, A. & Baktash, M. 2012. Numerical Approach for Temperature Development of Horizontal Pipe Flow with Thermal Leakage to Ambient. International Journal of Modern Engineering Research 2(5): 3784–94.
S. Parvathi, V.P. Nithin, S. Nithin, N. Nived, P.A. Abdul Samad & C.P. Sunil Kumar
Government Engineering College, Thrissur, Kerala, India
ABSTRACT: A burner having a conical bluff body with a central air injector is considered. In this paper, the effects of the central air jet on the heat load of the bluff body are investigated. The flame structures and the flame blowoff temperatures were compared with the corresponding simulated outcomes. The simulation results confirm that the considerable reduction in the heat load on the bluff body by the central air jet, determined experimentally, is valid, so the problem caused by the high heat load in practical applications has a solution. The addition of the central air jet alters the flame structures and blowout temperatures, as shown in both simulation and experiment, and the various blowoff behaviors caused by the air jet observed experimentally also match those that were simulated. It is evident from simulation and experiment that central air injection can cool down the bluff body; however, flame stability could not be accomplished.
1 INTRODUCTION
2 METHODOLOGY
The schematic of the conical bluff-body burner is shown in Figure 1. After the literature sur-
vey, the simulation model was decided. Since the bluff body burner was symmetrical, a 2-D
simulation was chosen. The model for the same was created in Ansys Fluent software, which
is based on the finite volume method. The general transport equations for mass, momentum,
energy etc. are applied to each cell and discretized. All equations are then solved to render
the flow field. As in the experiment carried out by Tong et al. (2017), a 45° conical bluff-
body was placed in the center of the burner. The inner diameter of the circular pipe for the
methane-air flow is 30 mm. The bluff-body has top and inner diameters of 14 mm and 4 mm
respectively. The thickness of the pipe wall is 2 mm. Premixed methane-air is fed through the
annular channel. Air is injected through the central pipe. The mass flow rate of the methane-air mixture is varied by changing its equivalence ratio, which is set between Φannular = 0.64 and the blowoff limit.
The boundary conditions are applied as per the experiment. The velocities are varied
accordingly. The combustion equation of methane is chosen as given in the Fluent software.
As cited in the literature (Euler et al. 2014), the temperature distribution is highest at the
center of the bluff-body. Tapex is taken as the temperature at the apex of the bluff-body sur-
face. For the boundary condition, the emissivity (ε) of the bluff-body is set as 0.58 (the ε of stainless steel varies from 0.54 to 0.63). Taking a central air jet velocity Ujet = 0, an annular velocity Uannular = 2.77 m/s and an annular flow equivalence ratio Φannular = 0.64, the temperature at the apex of the injection hole is approximately 480 K, which is referred to as the reference temperature T0.
Fluent is used for simulation throughout the study. Figure 2 shows temperature changes of
Tapex when shutting down the fuel supply in simulation as well as in the experiment.
The fuel supply is shut down by reducing the mass flow rate of the methane-air flame
gradually to zero. Because of the weak flame attached to the bluff-body, the temperature
decreases with a sharp slope at first. Thereafter, due to the heat convection to the environ-
ment, when the effect of the flame completely disappears, the surface temperature changes
slowly over time. After the blowoff, the rate of decrease of temperature is less than 3 K/s in both cases, and the temperature at this point is taken as Tapex.
In both the simulation and the experiment, the temperature is 513 K at 1 s before the fuel supply is shut down; at the instant of shutdown (0 s), it drops to 481 K in both cases.
In both cases, we have selected the temperature of the bluff body surface at times within
1 second after the flame is totally blown off. Thus the simulated pattern matches the experi-
mental results.
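The blowoff instant can be picked out of a logged temperature history with the same criterion. A minimal sketch, assuming a sampled Tapex record (the sample values below are illustrative, not measured data from this study):

```python
import numpy as np

def below_decay_threshold(times, temps, threshold=3.0):
    """Flag samples where |dT/dt| < threshold (K/s), the criterion used
    above to select Tapex after the flame is blown off."""
    rates = np.abs(np.gradient(temps, times))
    return rates < threshold

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # s
T = np.array([513.0, 495.0, 481.0, 479.5, 478.4])  # K, illustrative
print(below_decay_threshold(t, T))
```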
Continuity:

\frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} + \frac{\partial (\rho w)}{\partial z} = 0

Momentum:

\frac{\partial (\rho u u)}{\partial x} + \frac{\partial (\rho u v)}{\partial y} + \frac{\partial (\rho u w)}{\partial z} = -\frac{\partial p}{\partial x} + \frac{\partial \tau_{xx}}{\partial x} + \frac{\partial \tau_{xy}}{\partial y} + \frac{\partial \tau_{xz}}{\partial z} + \rho f_{x}

\frac{\partial (\rho v u)}{\partial x} + \frac{\partial (\rho v v)}{\partial y} + \frac{\partial (\rho v w)}{\partial z} = -\frac{\partial p}{\partial y} + \frac{\partial \tau_{yx}}{\partial x} + \frac{\partial \tau_{yy}}{\partial y} + \frac{\partial \tau_{yz}}{\partial z} + \rho f_{y}

\frac{\partial (\rho w u)}{\partial x} + \frac{\partial (\rho w v)}{\partial y} + \frac{\partial (\rho w w)}{\partial z} = -\frac{\partial p}{\partial z} + \frac{\partial \tau_{zx}}{\partial x} + \frac{\partial \tau_{zy}}{\partial y} + \frac{\partial \tau_{zz}}{\partial z} + \rho f_{z}
where ρ is the density, p the pressure, τ the shear stress, f the body force, and u, v, w the velocity components in the x, y, z directions.
Figure 3. Comparison of effects of central air jet on the temperature of the bluff-body surface in experiment, simulation and steady state.
Figure 4. Blowoff temperature distribution with respect to bluff body face radial distance.
Figure 5. Steady state temperature distribution with respect to bluff body face radial distance.
With conditions of Ujet/Uannular = 0, 1, 2.46, 4.87 and 8.8 (from left to right), variations of
temperature in K with respect to axial distance are shown in Figure 7.
From Figure 6, it is obvious that the temperature downstream of the bluff-body is the highest without the introduction of the central air jet. When the central air jet is injected, it creates a layer separating the main heat release zone and the flame from the surface of the bluff-body. The fresh cold air from the central jet fills the non-luminous recirculation zone. With the addition of the central air jet, the fuel-air ratio decreases, creating a leaner mixture, and the flame becomes weaker due to the reduction in the size of the heat release zone. That is why the bluff-body temperature with the central air jet is lower than T0. At small velocity ratios of Ujet/Uannular, the annular flow dominates the flow field, making the temperature with Ujet/Uannular ∼ 1 the lowest, as is evident in Figure 7. When the recirculation zones downstream of the bluff-body are dominated by the annular flow, a small central air jet may form a fresh air layer that keeps the flame from attaching to the bluff-body; this cold layer also reduces the heat convection from the burnt products to the bluff-body.
As is evident from Figure 7, the peak temperatures occur at axial distances of 150 mm, 200 mm, 260 mm and 300 mm as the velocity ratio changes through 1, 2.46, 4.87 and 8.8, respectively. This means that the heat releasing zone becomes larger and travels farther downstream as the central air jet velocity increases. In the absence of a central air jet, the peak temperature of 1,550 K occurs at a distance of 50 mm and the heat releasing zones cling together in a small region.
6 CONCLUSION
In this paper, the effects of the central air jet on a bluff-body stabilized premixed methane-air flame, namely the bluff-body surface temperature, the flame blowoff temperature and the flame structures, are studied by simulation, thereby validating the corresponding experimental outcomes reported by Tong et al. (2017). In both cases, it can be seen that the central air jet reduces the heat load on the bluff-body surface. However, on further addition of the air jet, the flame becomes unstable and blows off easily. The variation of apex temperature with velocity ratio, the variation of flame blowoff temperature with face radial distance and the temperature contours of the flame structure at different velocity ratios are thus validated by the simulation study.
REFERENCES
Chaparro, A.A. & Cetegen, B.M. (2006). Blowoff characteristics of bluff-body stabilized conical
premixed flames under upstream velocity modulation. Combustion and Flame, 144(1), 318–335.
Chaudhuri, S. & Cetegen, B.M. (2009). Blowoff characteristics of bluff-body stabilized conical premixed
flames in a duct with upstream spatial mixture gradients and velocity oscillations. Combustion Sci-
ence and Technology, 181(4), 555–569.
Esquiva-Dano, I., Nguyen, H.T. & Escudie, D. (2001). Influence of a bluff-body’s shape on the stabiliza-
tion regime of non-premixed flames. Combustion and Flame, 127(4), 2167–2180.
Euler, M., Zhou, R., Hochgreb, S. & Dreizler, A. (2014). Temperature measurements of the bluff
body surface of a Swirl Burner using phosphor thermometry. Combustion and Flame, 161(11),
2842–2848.
Guo, P., Zang, S. & Ge, B. (2010). Technical brief: predictions of flow field for circular-disk bluff-body
stabilized flame investigated by large eddy simulation and experiments. Journal of Engineering for
Gas Turbines and Power, 132(5), 054503.
Lefebvre, A.H. & Ballal, D.R. (2010). Gas turbine combustion. CRC Press.
Longwell, J.P., Frost, E.E. & Weiss, M.A. (1953). Flame stability in bluff body recirculation zones. Industrial & Engineering Chemistry, 45(8), 1629–1633.
Longwell, J.P. (1953). Flame stabilization by bluff bodies and turbulent flames in ducts. Symposium
(International) on Combustion. Elsevier, 4(1).
Pan, J.C., Vangsness, M.D. & Ballal, D.R. (1991). Aerodynamics of bluff body stabilized confined tur-
bulent premixed flames. ASME 1991 International Gas Turbine and Aeroengine Congress and Exposi-
tion. American Society of Mechanical Engineers.
Roquemore, W.M., Tankin, R.S., Chiu, H.H. & Lottes, S.A. (1986). A study of a bluff-body combustor
using laser sheet lighting. Experiments in Fluids, 4(4), 205–213.
Shanbhogue, S.J., Husain, S. & Lieuwen, T. (2009). Lean blowoff of bluff body stabilized flames: Scal-
ing and dynamics. Progress in Energy and Combustion Science, 35(1), 98–120.
Tang, H., Yang, D., Zhang, T. & Zhu, M. (2013). Characteristics of flame modes for a conical bluff body
burner with a central fuel jet. Journal of Engineering for Gas Turbines and Power, 135(9), 091507.
Wright, F.H. (1959). Bluff-body flame stabilization: blockage effects. Combustion and Flame, 3,
319–337.
Tong, Y., Li, M., Thern, M., Klingmann, J., Weng, W., Chen, S. & Li, Z. (2017). Experimental investi-
gation on effects of central air jet on the bluff body stabilized premixed methane-air flame, Energy
Procedia, 107, 23–32.
Zukoski, E.E., Marble, F.E. (1955a). The role of wake transition in the process of flame stabilization on
bluff bodies. AGARD Combustion Researches and Reviews, 167–180.
Zukoski E.E., Marble F.E. (1955b). Gas dynamic symposium on aerothermochemistry. Northwestern
University, Evanston, IL.
1 INTRODUCTION
Any item of equipment that plays a vital role in a production sequence requires significant care, because a hazard in such critical equipment will affect the entire production process and cause greater damage. The identification of Energy-Intensified Equipment (EIE) is one of the crucial phases of Reliability-Centered Maintenance (RCM), achieved through a combination of quantitative and qualitative analysis (Barabady & Kumar, 2008). This research has been carried
out in a chemical company called Travancore Cochin Chemicals (TCC) Limited, situated in
Ernakulam district, Kerala, India. The chemical processing industry plays an essential role in
the manufacture of many chemicals, such as caustic soda, sodium chloride, chlorine, sulphu-
ric acid, hydrochloric acid and bleaching powder. There are more than 600 types of equip-
ment involved in their production. The industry provides a tremendous variety of materials
to other manufacturers, such as textiles, rayons, plastics, aluminum, detergents, drugs, ferti-
lizers, food preservatives, and paper-producing industries. It also produces chemical prod-
ucts that benefit people directly. Several changes in equipment have been taking place in the
processing of chemicals, and chemical processing industries in India are facing certain chal-
lenges that need to be addressed for their survival in the era of globalization. This analysis
2 LITERATURE REVIEW
This literature review is a brief review of research conducted in the areas of reliability and
maintenance programs of equipment, in order to identify the equipment and tools to be
adopted for the present study. Initially, the literature reveals several methods that can be used
to identify equipment, based on classifications such as ABC (Always Better Control), Vital/Essential/Desirable (VED), Scarce/Difficult/Easily available (SDE), High/Medium/Low (HML), and Fast/Slow/Non-moving (FSN). However, these methods do not establish the criticality, the damage caused by failure, or the failure modes, nor do they help to plan an optimum maintenance program (Sanjeevy & Thomas, 2014).
Today, the methods most commonly applied in this field are Failure Mode Effect and
Criticality Analysis (FMECA) and Reliability-Centered Maintenance (RCM). Reliability
can be expressed as the probability that a process or item of equipment will perform its function or task under stated conditions for a definite observation period. Reliability analysis meth-
ods have been increasingly accepted as typical tools for the development and management
of regular and intricate processing methods since the mid-1980s. The occurrence of failure
cannot be prevented completely, but it is important to reduce both its chance of happen-
ing and the impact of failures when they occur (Barabady & Kumar, 2008). To sustain
the intended reliability, availability, and maintainability features and to attain expected
performance, a valid maintenance plan is essential. Both corrective and preventive mainte-
nance have direct consequences on the reliability of equipment and, consequently, its per-
formance. Hence the identification of energy-intensified equipment is critical for reliability
evaluation procedures. To overcome the limitations of the traditional classifications such
as the selective inventory controls of ABC, VED, SDE, HML, and FSN, and evaluation
using FMECA methodology, the present methodology is adopted. Ben-Daya and Raouf
(1996) noted that the economic model proposed by Gilchrist (1993) addresses a problem
that differs from the problem FMECA is intended to address. They combined the expected
cost model proposed by Gilchrist with their improved Risk Priority Number (RPN) model
in order to provide a quality improvement technique at the production stage. They also
argued that scoring the factors on a 1 to 9 scale and treating them as being of identical significance is not practical. According to their model, the probability of an event should carry more weight, and the probability score (on a 1–9 scale) is therefore raised to the power of 2 (Tang et al., 2017; Puthillath & Sasikumar, 2012).
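A minimal sketch of that weighting, with the occurrence score squared as described (a simplified reading of the revised model, not Ben-Daya and Raouf's exact cost-based formulation; the scores below are illustrative):

```python
def rpn_classic(severity, occurrence, detection):
    """Classic FMECA risk priority number: S x O x D."""
    return severity * occurrence * detection

def rpn_revised(severity, occurrence, detection):
    """Revised RPN in which the occurrence (probability) score, on a
    1-9 scale, is raised to the power of 2 so likelihood weighs more."""
    return severity * occurrence ** 2 * detection

print(rpn_classic(7, 4, 5), rpn_revised(7, 4, 5))  # 140 vs 560
```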
There are two kinds of criticality analysis: quantitative and qualitative. To use the quan-
titative criticality analysis method, the investigation group has to identify the dependabil-
ity/unpredictability for every element, in a specified working period, to recognize the part
of the element’s unpredictability that can be attributed to each probable failure mode, and
rate the possibility of loss (or severity) that will result from each failure mode that can
occur (Sachdeva et al., 2009). Several authors make use of fuzzy set theory to tackle uncertainties in maintenance decision-making; Chang et al. (1999) argued for the use of grey theory to obtain criticality valuations. The use of fuzzy logic theory for maintenance-criticality
inquiry is also suggested in the literature (Eti et al., 2006; Teng & Ho, 1996; Jayakumar &
Asgarpoor, 2004).
In reality, all items of equipment cannot be controlled with equal attention. Effective identification of critical equipment calls for an understanding of the nature of care required. Some equipment may be very important, while some is too small or too unimportant to call for a rigorous and intensive control mechanism. Criticality analysis means varying the method of control from equipment to equipment on the basis of physical factors; the criterion used for this purpose may be criticality, risk, maintenance difficulty, or something else. Controlling the area of operation for good performance involves the time, money and effort required to conduct operations quickly and to avoid sudden damage. Therefore, to achieve this objective it is not necessary to control the entire area of operations, but only that part which, if left uncontrolled, is likely to cause damage. Thus, criticality analysis means selecting the areas of control so that the required objective is achieved as early as possible, without the loss of time that would be involved in taking care of the full area.
5 CONCLUSION
The identification of EIE is one of the key phases in the elimination of accidents and severe damage to production systems. The current investigation demonstrates that a methodical
approach has been implemented to distinguish the EIE and address problems arising in the
chemical processing industry. A number of proposals are suggested in our discussion of criti-
cality analysis for caring for such types of equipment and improving the existing maintenance
strategy. In addition, it is very clear from the literature survey that, although many methods are available for analyzing the reliability and maintenance program of equipment, they lack the means to identify the equipment whose failure involves hazards. This literature review is intended to
provide an idea of the preceding works that facilitated this work, helping to identify various
significant methodologies used in this field. Moreover, it assists in selecting the appropriate
procedure to identify the energy-intensified equipment for reliability analysis and the tool
for developing the maintenance program. The present work extends the scope of detailed
analysis in energy-intensified equipment.
REFERENCES
Barabady, J. & Kumar, U. (2008). Reliability analysis of mining equipment: A case study of a crushing
plant at Jajarm bauxite mine in Iran. Reliability Engineering and System Safety, 93, 647–653.
Ben-Daya, M. & Raouf, A. (1996). A revised failure mode and effect analysis model. International Jour-
nal of Quality & Reliability Management, 13(1), 43–47.
Eti, M.C., Ogaji, S.O.T. & Probert, S.D. (2006). Development and implementation of preventive-main-
tenance practices in Nigerian industries. Applied Energy, 83, 1163–1179.
Gilchrist, W. (1993). Modeling failure modes and effect analysis. International Journal of Quality and
Reliability Management, 10, 16–23.
Jayakumar, A. & Asgarpoor, S. (2004). Maintenance optimization of equipment by linear program-
ming. Probability in Engineering and Information Science, 20, 183–193.
Puthillath, B. & Sasikumar, R. (2012). Selection of maintenance strategy using failure mode effect and
criticality analysis. International Journal of Engineering and Innovative Technology, 1(6), 73–79.
Sachdeva, A., Kumar, D. & Kumar, P. (2009). Multi-factor failure mode criticality analysis using TOPSIS. Journal of Industrial Engineering International, 5(8), 1–9.
Sanjeevy, C. & Thomas, C. (2014). Use and application of selective inventory control techniques of
spares for a chemical processing plant. International Journal of Engineering Research & Technology,
3(10), 301–306.
Tang, Y., Liu, Q., Jing, J., Yang, Y. & Zou, Z. (2017). A framework for identification of maintenance
significant items in reliability centered maintenance. Energy, 118, 1295–1303.
Teng, S.-H. & Ho, S.-Y. (1996). Failure mode and effects analysis: An integrated approach for
product design and process control. International Journal of Quality & Reliability Management,
13, 8–26.
ABSTRACT: This article presents a Fractional Filter Fractional Order Proportional Inte-
gral Derivative (FFFOPID) controller design method for higher order systems, approximated
as Non-Integer Order Plus Time Delay (NIOPTD) models. The design uses an Internal
Model Control (IMC) scheme, and the resulting controller has the series form of a Fractional Order Proportional Integral Derivative (FOPID) term in series with a fractional filter. An
analytical tuning method is then used to identify the optimum controller settings. Simulation
results on different systems show that the proposed method gives better output perform-
ance for set point tracking, disturbance rejection, parameter variations, and for measurement
noise in the output. The robust stability of the system regarding process parametric uncer-
tainties is verified with robustness analysis. Controllability index analysis is also undertaken
to ascertain the closed loop system performance and robustness.
Keywords: Internal Model Control, robust stability, closed loop system, Fractional Order
Proportional Integral Derivative
1 INTRODUCTION
Higher order models describe the process dynamics more accurately than lower order models
(Isaksson & Graebe, 1999; Malwatkar et al., 2009). However, they complicate the controller
design and tuning for quality control. There are several controller tuning rules for higher
order models, approximated as First Order Plus Time Delay (FOPTD) models. Most of these
rules are to tune a controller having a Proportional Integral Derivative (PID) structure, which
has been widely used to date (Aström & Hägglund, 1995; Skogestad, 2003). The controller
designed for such FOPTD models may not give satisfactory performance, as the dynamics
are compromised during the approximation. An alternative to preserve the dynamics, while
ensuring satisfactory control, is to approximate them as Non-Integer Order Plus Time Delay
(NIOPTD) models (Pan & Das, 2013). The major advantage of NIOPTD models is that they
represent the process behavior more compactly than integer order systems (Podlubny, 1999).
Fractional order control for fractional order systems has been in focus in the last two
decades (Shah & Agashe, 2016). Several fractional order controller structures have been pro-
posed and the widely accepted one is the Fractional Order Proportional Integral Derivative
(FOPID) controller (Monje et al., 2008; Luo & Chen, 2009; Tavakoli-Kakhki & Haeri, 2011;
Padula & Visioli, 2011; Vinopraba et al., 2012; Das et al, 2011; Valério & da Costa, 2006).
The FOPID controller has the ability to enhance the closed loop performance, but the tun-
ing is complex as it has more tuning parameters than the PID controller. Recently, work has been reported in which the five FOPID parameters are identified based on the stability regions of the closed loop system (Bongulwar & Patre, 2017); however, the simulation results were shown only for the servo response.
In this paper, a Fractional Filter Fractional Order Proportional Integral Derivative
(FFFOPID) controller is proposed using Internal Model Control (IMC). The present work
uses a series form of a FOPID controller (Hui-fang et al., 2015). The resulting controller has
2 PRELIMINARIES
G(s) = G_{+}(s)\, G_{-}(s)   (1)

C_{IMC}(s) = \frac{f(s)}{G_{-}(s)}   (2)

C(s) = \frac{C_{IMC}(s)}{1 - C_{IMC}(s)\, G(s)}   (3)
C(s) = (\text{fractional filter}) \times K_{p}\, \frac{\tau_{i} s^{\lambda} + 1}{\tau_{i} s^{\lambda}}\, (1 + \tau_{d} s^{\mu})   (4)
To design the controller, the higher order system approximated as a NIOPTD model is
considered and is given by Equation 5:
G(s) = \frac{K\, e^{-Ls}}{T s^{\alpha} + 1}   (5)
where K, L, T and α are gain, time delay, time constant and fractional order of the model,
respectively. The feedback controller C(s), obtained according to the procedure in Section 2.1 using the IMC filter f(s) = 1/(\gamma s^{p} + 1) and the first order Pade approximation for the time delay, e^{-Ls} \approx (1 - 0.5Ls)/(1 + 0.5Ls), is:
C_{proposed}(s) = \frac{1}{0.5\gamma L s^{p} + \gamma s^{p-1} + L} \times \frac{L}{2K} \times \frac{1 + 0.5Ls}{0.5Ls} \times (1 + T s^{\alpha})   (6)

K_{p} = \frac{L}{2K}; \quad \tau_{i} = 0.5L; \quad \lambda = 1; \quad \tau_{d} = T; \quad \mu = \alpha; \quad \text{fractional filter} = \frac{1}{0.5\gamma L s^{p} + \gamma s^{p-1} + L}   (7)
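As a quick check of Equation 7, the sketch below computes the FOPID settings and the fractional filter coefficients from the model constants (the function name and the use of Example 1's numbers are ours):

```python
def fffopid_settings(K, L, T, alpha, gamma):
    """FOPID settings and fractional filter coefficients from Eq. 7;
    the tuning parameter p sets the filter exponents s^p and s^(p-1)."""
    return {
        "Kp": L / (2.0 * K),         # proportional gain
        "tau_i": 0.5 * L,            # integral time (lambda = 1)
        "tau_d": T,                  # derivative time
        "mu": alpha,                 # derivative fractional order
        # fractional filter = 1 / (c2*s^p + c1*s^(p-1) + c0)
        "filter_coeffs": (0.5 * gamma * L, gamma, L),
    }

# Example 1 model (Eq. 11): K = 0.99149, L = 1.6745, T = 2.8015, alpha = 1.0759.
# With gamma = 0.9 the filter coefficients are (0.7535, 0.9, 1.6745),
# matching the denominator in Eq. 12.
print(fffopid_settings(0.99149, 1.6745, 2.8015, 1.0759, gamma=0.9))
```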
3.1 Tuning
The tuning parameters γ and p are chosen in such a way that the measures IAE and TV are minimal. The optimum values are identified from the behavior of IAE and TV as γ and p are varied in the range of (-10%, +10%); finally, the values of γ and p that give the minimum of both IAE and TV are chosen. A sketch of this search is given below.
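A minimal sketch of that search, assuming a user-supplied closed-loop simulation closed_loop_cost(gamma, p) that returns (IAE, TV); both that helper and the equal weighting of the two measures are our assumptions:

```python
import numpy as np

def tune(gamma0, p0, closed_loop_cost, span=0.10, n=11):
    """Grid search for gamma and p within +/-10% of nominal values,
    keeping the pair that minimizes IAE and TV together."""
    best, best_score = (gamma0, p0), float("inf")
    for g in np.linspace(gamma0 * (1 - span), gamma0 * (1 + span), n):
        for p in np.linspace(p0 * (1 - span), p0 * (1 + span), n):
            iae, tv = closed_loop_cost(g, p)
            score = iae + tv          # equal weighting is an assumption
            if score < best_score:
                best, best_score = (g, p), score
    return best
```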
4 ROBUSTNESS ANALYSIS
The stability of a closed loop system should always be analyzed for process parameter uncer-
tainties because the process model is an approximation of the real plant. The robust stability
condition (Morari & Zafiriou, 1989) is:
\| T(j\omega) \|_{\infty} < \frac{1}{\left| \left( \frac{\Delta K}{K} + 1 \right) e^{-\Delta L\, j\omega} - 1 \right|}   (9)
where 1-T(s) is the sensitivity function and wm(s) is the uncertainty bound on the sensitivity
function.
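The condition can be screened numerically on a frequency grid. The sketch below assumes the multiplicative-uncertainty reading of Equation 9; the first-order |T(jω)| is an illustrative stand-in, since computing T(s) for the actual loop is beyond this sketch:

```python
import numpy as np

def robustly_stable(T_mag, w, dK_over_K, dL):
    """Check |T(jw)| < 1 / |(1 + dK/K) exp(-jw dL) - 1| at every w."""
    lm = np.abs((1.0 + dK_over_K) * np.exp(-1j * w * dL) - 1.0)
    return bool(np.all(T_mag < 1.0 / lm))

w = np.logspace(-2, 2, 400)               # rad/s
T_mag = 1.0 / np.sqrt(1.0 + w ** 2)       # illustrative first-order T(s)
print(robustly_stable(T_mag, w, dK_over_K=0.10, dL=0.5 * 1.6745))
```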
5 SIMULATION STUDY
Three higher order systems approximated as NIOPTD models are simulated in MATLAB and
the system performance is compared with the Bongulwar and Patre (2017) method (hereafter
addressed as the Patre (2017) method). The effectiveness of the proposed method is verified
with the performance measures %OS, ST, IAE and TV, which are defined in Table 1. Settling time is defined as the time taken for the response to settle within 2% to 5% of its final value.
The closed loop system's unit step response is observed, with a step change in disturbance of magnitude 0.5 applied at a later time. Also, the step response is observed for perturbations of +10% in L and K and for measurement noise with a variance of 0.0001. The system robustness for uncertainty is illustrated in the following sections through stability analysis. The frequency band used for the Oustaloup approximation of the fractional order terms is (0.01, 100) rad/s; a sketch of this approximation is given below. In addition, the trend of closed loop behavior is interpreted for variation in the controllability index (i.e. the L/T ratio, in the range of 0.1 to 2). This analysis demonstrates the difficulty of control for large changes in time delay.
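For reference, a minimal sketch of the Oustaloup recursive approximation of s^α over the quoted band (standard textbook form; the order N = 4 is an illustrative choice, not from the paper):

```python
import numpy as np

def oustaloup_zpk(alpha, wb=0.01, wh=100.0, N=4):
    """Zeros, poles and gain approximating s**alpha over (wb, wh) rad/s
    with 2N+1 first-order sections."""
    k = np.arange(-N, N + 1)
    zeros = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
    poles = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
    return zeros, poles, wh ** alpha

# Magnitude check at the geometric band centre (w = 1 rad/s):
z, p, K = oustaloup_zpk(0.5)
H = K * np.prod((1j + z) / (1j + p))
print(abs(H))  # should be close to 1**0.5 = 1
```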
5.1 Example 1
Consider the higher order system (Shen, 2002) and its equivalent NIOPTD (Patre, 2017)
model:
G_{1}(s) = \frac{1}{(s+1)^{4}} = \frac{0.99149\, e^{-1.6745 s}}{2.8015 s^{1.0759} + 1}   (11)

C_{proposed}(s) = \frac{1}{0.7535 s^{1.01} + 0.9 s^{0.01} + 1.6745} \times 0.99149 \times \frac{0.8372 s + 1}{0.8372 s} \times (1 + 2.8015 s^{1.0759})   (12)

C_{old}(s) = 1.5129 + \frac{0.3432}{s^{1.1}} + 0.1733 s^{1.05}   (13)
The optimum values of γ and p for the proposed controller are identified as 0.9 and 1.01
(Figure 2). The performance measures for set point tracking are given in Table 2. The pro-
posed method is superior in performance compared to the Patre (2017) method with lower
values of %OS, ST, IAE and TV. The servo response with a disturbance applied at t = 30 s is illustrated in Figure 3. Improvement is observed even with the disturbance, with lower values of the performance measures, as is clear from Table 3; it is evident from Figure 3 that satisfactory disturbance rejection is achieved. Figure 4 shows the step
response for a perturbed model, and the response for white noise in the measured output is
illustrated in Figure 5. It is observed that there is enhanced performance (Table 2) for both
the cases with the proposed method, as compared to the Patre (2017) method. Also, there is significantly less control effort with the proposed method for all the possible input changes.

Table 1. Definitions of the performance measures.

\%OS = \frac{y_{peak} - y_{ss}}{y_{ss}} \times 100; \quad IAE = \int_{0}^{\infty} |e(t)|\, dt; \quad TV = \sum_{i=0}^{\infty} |u_{i+1} - u_{i}|
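A sketch of these measures for sampled response data (the 2% settling band and the array-based interface are our choices; the text above allows 2% to 5%):

```python
import numpy as np

def performance_measures(t, y, u, y_ss=1.0, band=0.02):
    """%OS, settling time, IAE and TV per Table 1 for a sampled step
    response y(t) and controller output u(t)."""
    os = max(0.0, (y.max() - y_ss) / y_ss * 100.0)   # percent overshoot
    outside = np.abs(y - y_ss) > band * abs(y_ss)
    st = t[outside][-1] if outside.any() else t[0]   # settling time
    iae = np.trapz(np.abs(y_ss - y), t)              # integral of |e(t)|
    tv = float(np.sum(np.abs(np.diff(u))))           # total variation of u
    return os, st, iae, tv
```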
The magnitude plot is shown in Figure 6 for +10% uncertainty in K; +10% and +50% uncer-
tainty in L. The robust stability condition in Equation 9 is violated by both complementary
sensitivity functions for +50% uncertainty in time delay. The proposed method violates the
Figure 7. L/T ratio versus IAE, TV for step change in set point.
condition a bit earlier than the old method. Figure 7 and Figure 8 show the trends of IAE
and TV for servo and regulatory response with increasing L/T ratio. It is evident that increas-
ing trends are observed with the Patre (2017) method, compared to the proposed method.
Hence, the proposed method can be considered for enhanced closed loop performance of
processes with large changes in time delay.
5.2 Example 2
The second example (Chen et al., 2008; Patre, 2017) considered for performance comparison is:
G_{2}(s) = \frac{9}{(s+1)(s^{2}+2s+9)} = \frac{1.0003\, e^{-0.4274 s}}{0.8864 s^{1.0212} + 1}   (14)
The proposed controller and the controller with Patre (2017) method are:
C_{proposed}(s) = \frac{1}{0.07479 s^{1.01} + 0.35 s^{0.01} + 0.4274} \times 0.2136 \times \frac{0.2137 s + 1}{0.2137 s} \times (1 + 0.8864 s^{1.0212})   (15)

C_{old}(s) = 1.4996 + \frac{1.2203}{s^{1.05}} + 0.0409 s^{1.05}   (16)
The optimum values of γ and p are identified as 0.35 and 1.01. The closed loop system gives
good servo response with the proposed method, as is evident from the lower values of %OS, ST, IAE and TV given in Table 2. The unit step response with a disturbance applied at t = 6 s
is shown in Figure 9 and the corresponding performance measures are given in Table 3. The
proposed method gives better servo response, which is evident with lower values of perform-
ance measures, while the regulatory performance is almost the same for both the control-
lers. The system response for perturbations is presented in Figure 10. Figure 11 presents the
closed loop response for white noise in the output. The proposed method continues to give
the superior performance compared to the Patre (2017) method, which is clear with the lower
values of IAE and TV (Table 3).
The closed loop robust stability for uncertainties in K and L is illustrated through the mag-
nitude plot in Figure 12. The closed loop system gives robust performance up to +100% uncer-
tainty in time delay and +10% uncertainty in gain with the proposed controller, whereas the
stability condition fails for +90% uncertainty in time delay with the Patre (2017) method. Fig-
ure 13 and Figure 14 show the trends of IAE and TV for servo and regulatory response with
increase in L/T ratio. The proposed method shows less control effort for servo and regulatory
response for the entire variation of L/T ratio. The trend followed by IAE for set point tracking
is almost the same up to L/T ratio of 1 for both the methods; after that, it starts increasing with
the old method. In the case of disturbance rejection, the IAE values are lower up to L/T ratio
of 1.3 with the old method, and then it increases. Hence, the proposed method is a good choice
to have a better control for increasing L/T ratio, compared to the old method (Patre, 2017).
5.3 Example 3
The higher order system studied in Panagopoulos et al. (2002) is considered as the third
example:
G_{3}(s) = \frac{1}{(s+1)(0.2s+1)(0.04s+1)(0.008s+1)} = \frac{0.99932\, e^{-0.1922 s}}{1.0842 s^{1.0132} + 1}   (17)
The proposed and old (Patre, 2017) controllers are given as follows:
C_{proposed}(s) = \frac{1}{0.0096 s^{1.1} + 0.1 s^{0.1} + 0.1922} \times 0.0961 \times \frac{0.0961 s + 1}{0.0961 s} \times (1 + 1.0842 s^{1.0132})   (18)

C_{old}(s) = 5.0034 + \frac{6.14}{s^{1.1}} + 0.0163 s^{1.1}   (19)
The values of γ and p for the proposed method are 0.1 and 1.1. The performance measures shown in Table 2 for the servo response indicate that the %OS, ST and IAE values are lower with the proposed method, but that the TV value is slightly higher compared to the old (Patre, 2017) method. Similarly, the step response for a change in disturbance applied at
t = 5 s is shown in Figure 15. The closed loop step response for process parameter variations
and for output noise is illustrated in Figure 16 and Figure 17. The corresponding perform-
ance measures for all the above cases are presented in Table 3. It is evident from all these
Figures that the proposed method gives superior servo performance but is a bit slow in reject-
ing the disturbance compared to the old method.
The proposed method gives robust performance up to an uncertainty of +70% in L and
+10% in K, while the Patre (2017) method fails for less than +50% uncertainty in L (Figure 18).
Figure 19 and Figure 20 show the performance for variation of the L/T ratio. For the servo response,
the variation of IAE is low with the proposed method, while the control effort is slightly high
up to L/T = 0.9, and then it increases drastically with the old method. In the case of regula-
tory control, the IAE values are higher with the proposed method and the variation of TV
is low. Hence, there is a trade-off between IAE and TV for increasing L/T, and the proposed
method is recommended for servo response, while it can be used for disturbance rejection at
higher values of L/T.
6 CONCLUSIONS
In this paper, a FFFOPID controller is proposed for higher order systems approximated as NIOPTD models using the IMC method. An analytical method is followed for identifying the tuning parameters by minimizing IAE and TV. Enhanced closed loop performance is observed with the proposed method for changes in set point and disturbance; in particular, the proposed method requires less control effort. The closed loop system is robust with the proposed method for high uncertainty in the process parameters. Also, the proposed method assures better control for large changes in time delay, as demonstrated through the variation of the L/T ratio.
REFERENCES
Aström, K.J. & Hägglund, T. (1995). PID controllers: Theory, design, and tuning. Research Triangle
Park, NC: ISA.
Bongulwar, M.R. & Patre, B.M. (2017). Stability regions of closed loop system with one non-integer
plus time delay plant by fractional order PID controller. International Journal of Dynamics and Con-
trol, 5(1), 159–167.
ABSTRACT: In the petroleum industry, oily water occurs in the stages of production, trans-
portation and refining, as well as during the use of derivatives. Crude oil is one of the major
components of wastewater from the petroleum industry. The project focuses on the use of the Electro-Fenton (EF) technique for the removal of oil content from this wastewater. Hydrogen peroxide is used as Fenton's reagent and Fe2+ is provided from sacri-
ficial cast iron anodes. The crude oil samples are taken from the Cochin refinery. An experimental study has been conducted to investigate the influence of various factors, such as current density, electro-Fenton time, feed pH, and H2O2 concentration, on the electro-Fenton process.
1 INTRODUCTION
Petroleum products play an unavoidable role in our daily lives and our demand is increasing day
by day. Petroleum products include transportation fuels, fuel oils for heating and electricity gener-
ation, and feedstocks for making the chemicals, plastics, and synthetic materials that are in nearly everything we use. Of the approximately 7.19 billion barrels of total US petroleum consumption in
2016, 48% was motor gasoline (including ethanol), 20% was distillate fuel (heating oil and diesel
fuel), and 8% was jet fuel. So, these products have become essential in our routine lives.
These petroleum products are made from crude oil through a refining process. During the
refining process, petroleum refineries unavoidably generate large amounts of oily wastewater.
These wastes occur at different stages of oil processing, such as during production, transpor-
tation and refining. However, during the production phase large amounts of oily wastes are
generated, which become mixed with the sea water and cause pollution. Coelho et al. (2006)
reported that the quantity of water used in the oil refinery processing industry during the
production stage ranges from 0.4 to 1.6 times the volume of processed oil, and this wastewa-
ter may, if untreated, cause serious damage to the environment.
The presence of oil and grease in the water bodies accounts for a major part of water pol-
lution. Alade et al. (2011) explain the effects of these on the economy. The oil and grease will
form a thin oily layer above the water medium, which reduces the light penetration into the
water medium, thereby decreasing photosynthesis. Thus, it affects the survival of aquatic life
in water since the amount of dissolved oxygen in the water is less. It also affects the aerobic
and anaerobic wastewater treatment process due to the reduction in oxygen transfer rates,
and also due to the reduction in the transport of soluble substrates to the bacterial biomass.
So, more attention must be given to the treatment of oily wastewater. There are various
methods available for oil removal from wastewater, such as physical treatment, chemical
treatment, biological treatment, membrane treatment and advanced oxidation processing
(Krishnan et al., 2016).
The motivation of this project is the application of the Electro-Fenton (EF) process for
large-scale industries and wastewater treatment plants. Electro-Fenton treatment is regarded
as being a better mechanism for water treatment units.
RH + HO^{\bullet} \rightarrow R^{\bullet} + H_{2}O   (2)

R^{\bullet} + Fe^{3+} \rightarrow R^{+} + Fe^{2+}   (3)
The project mainly focuses on the study of different parameters that affect the electro-Fenton
process for removal of oil and grease content from crude oil wastewater. The electro-Fenton
process has two different configurations. In the first, Fenton's reagents are added to the reactor from outside and inert electrodes with high catalytic activity are used as the anode material, while in the second configuration only hydrogen peroxide is added from outside and Fe2+ is provided from sacrificial cast iron anodes (Nidheesh & Gandhimathi, 2012). Here we use the second configuration, with mild steel/iron as the anode and stainless steel as the cathode; therefore, an additional supplement of Fe2+ is not needed.
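The theoretical Fe2+ supply from such a sacrificial anode follows Faraday's law. The estimate below is a textbook calculation layered on the setup described above, not a result from this study; the actual yield depends on the current efficiency:

```python
def fe2_supplied_g(current_a, time_s, z=2, molar_mass=55.845, F=96485.0):
    """Mass (g) of Fe2+ released by an iron anode: m = I*t*M / (z*F)."""
    return current_a * time_s * molar_mass / (z * F)

# e.g. 1 A passed for a 20-minute run
print(fe2_supplied_g(1.0, 20 * 60))  # ~0.35 g of Fe2+
```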
2.2.2 Experiment
Before the experiment, the electrodes are first cleaned using 10 ml of dilute HCl, and the weights of the anodes and cathodes are noted. For the Fenton process, 2 liters of feed is taken in the reactor. The electrodes (cathodes and anodes) are arranged in a bipolar configuration and connected to the DC supply, and the current and voltage are adjusted. The experiment is run according to the values of the parameters (current density, H2O2 concentration, pH, time) in the experiment list. The salt concentration of the feed is 1.5 g.
After the experiment, the samples are collected and analyzed using COD value, fluores-
cence spectrometry and gravimetric analysis.
The project focuses on the removal of oil and grease from crude oil wastewater by the electro-Fenton process. COD measurement and the analysis of oil and grease using fluorescence spectroscopy and gravimetry are used to estimate the percentage oil removal in each experiment: the samples from each experiment are analyzed and the percentage removal of oil and grease is found by comparison with the feed values. The COD value gives the amount of organic substances present in the water; therefore, it is an important measure of water quality. The percentage removal calculation is sketched below.
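A one-line sketch of that feed comparison (the COD values are illustrative, not measurements from this study):

```python
def percent_removal(feed_value, treated_value):
    """Percentage removal of oil and grease (or COD) relative to the feed."""
    return (feed_value - treated_value) / feed_value * 100.0

print(percent_removal(1200.0, 480.0))  # illustrative COD in mg/L -> 60.0
```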
3.2.1 Influence of pH
The pH value will affect the oxidation and coagulation of the electro-Fenton process. The
impact of initial pH value on percentage removal was studied. The initial pH was adjusted
using 0.1 N HCl and 0.1 N NaOH. The initial pH value was found using a digital pH meter.
The experiment was conducted at a current density of 0.5 A/dm2, a reaction time of 20 mins, and an H2O2 concentration of 0.275 g/l.
The initial pH plays an important role in the electro-Fenton process. Figure 3 shows the percent-
age of oil removal using COD and fluorescence analysis. The study was conducted over a pH range of 2–7. Generally, the Fenton process is conducted at a pH below 7 (i.e. in acidic medium).
From the results it was clear that the maximum removal was obtained at a pH of around 4. The removal becomes less effective at pH < 3; this is attributed to the regeneration of Fe2+ ions through the reaction between Fe3+ and H2O2. At higher pH the removal also decreases rapidly: at pH > 5, the % removal was found to fall to around 50%. This is due to the fact that H2O2 is unstable in basic solution (Nidheesh & Gandhimathi, 2012).
4 CONCLUSION
The work focused on the removal of oil and grease from crude oil processed wastewater released from refineries. Crude oil wastewater from BPCL, Kochi was selected as the sample for treatment. The electro-Fenton technique was used for the removal of oil and grease, and both oxidation and coagulation contributed to COD removal through the Fenton treatment of the wastewater.
The COD test and fluorescence spectrometry analysis were the major analyses conducted on the treated water. Electro-Fenton experiments were conducted with mild steel and stainless steel electrodes. The important parameters affecting the EF process were analyzed, viz. pH, current density, H2O2 concentration and time. The percentage removal was maximum at pH 4 and decreased as the pH was either increased or decreased from this value. The oil removal increases with increasing current density, and the maximum removal efficiency was obtained at the highest current density. In the case of H2O2 concentration, the maximum removal was obtained at 0.55 g/l with a reaction time of 30 mins.
REFERENCES
Alade, A.O., Jameel, A.T., Muyibi, S.A., Karim, M.I.A., & Alam, Z. (2011). Application of semifluidized
bed bioreactor as novel bioreactor system for the treatment of palm oil mill effluent (POME). Afri-
can Journal of Biotechnology, 10(81), 18642–18648.
ABSTRACT: The use of a large-pore catalyst with a high surface area, MCM-41, loaded
with alumina, is explored for the production of ethylene and propylene from methanol con-
version. MCM-41 is a silica-based catalyst with low Lewis acid site strength. Al/MCM-41
was treated with boric acid at three different concentrations. The modified catalysts were characterized using BET, chemisorption, XRD and SEM. The total surface area was observed to decrease after boric acid treatment, and the maximum decrease in surface area was obtained on treating Al/MCM-41 with boric acid (1 M). An N2 adsorption-desorption plot shows a change in the porous structure of the catalyst after treatment with boric acid. The conversion studies were performed at temperatures between 250 and 450°C and liquid flow rates in the range of 30 to 120 ml/hr. The effect of the catalyst on the selectivity of ethylene and propylene was studied with Al/MCM-41 and B-Al/MCM-41. Results showed that the boric acid treated Al/MCM-41 helps to increase the selectivity of the catalyst toward propylene production. The gas yield was also observed to increase with the boric acid treated catalyst. A 20.3% decrease in the coke yield was observed when the experiments were performed with the boric acid treated catalyst as compared to an untreated catalyst. However, no significant difference in coke production was observed among the three boric acid treated Al/MCM-41 catalysts.
1 INTRODUCTION
In the recent past, the scientific community has been putting more effort into looking for
alternative routes for the production of feedstock materials required in the petrochemical
industries. At present, olefins are mainly produced from methanol conversion. The main
source of methanol is methane, obtained from the petroleum refineries (Tian et al., 2015).
However, as petroleum refineries are now being modified to process the gas obtained from
the natural resources, new and effective technologies are needed to convert methane to meth-
anol and further feedstock materials such as DME and formaldehyde. In coming years, more
methane will be obtained from shale gas or gas hydrates (Lefevere et al., 2014).
Zeolites such as ZSM-5 have been widely used throughout the world, due to their shape selectivity, durability and reusability for a wide range of reactions in the petroleum refining and petrochemical industries (Khare et al., 2017). However, the small cage size and small pore
size of ZSM-5 are its major drawbacks in the Methanol to Olefin (MTO) process, wherein
large-size products cannot escape from the small cage opening, thereby deactivating the cata-
lyst. To overcome the fast deactivation issue of catalysts, a large-pore silica-based mesopo-
rous catalyst has been used for different reactions at various laboratories in the last decade,
and is a subject of interest (Wu et al., 2012). These catalysts have both two-dimensional and
three-dimensional structures with large-pore diameters and high surface area (e.g. SBA-15,
MCM-41, MCM-22, FDU-13 and MFU). However, these catalysts lack mechanical strength
and have low acid site concentrations required for reactions (Olsbye et al., 2012; Li et al.,
obtained at 500°C indicates the strong acidic sites. In general, Al impregnated MCM-41 has a higher strength of strong acid sites, which was observed to decrease after boric acid treatment, while a slight enhancement of the weak acid sites was observed. Based on the weak and strong acid site concentrations, Al/MCM-41 treated with boric acid (1.5 M) was selected for further studies.
3.2.2 Methanol conversion and gaseous product yield at different flow rates
The conversion studies were performed with 0.5 g of catalyst at liquid flow rates ranging from 30 to 120 ml/hr, keeping the nitrogen flow rate at 50 ml/min and the methanol-to-water ratio at 1:2 (Figure 4). The maximum methanol conversion (89.2%) was obtained at 90 ml/hr with B-Al/MCM-41, whereas with the Al/MCM-41 catalyst the maximum conversion was 83.4%. A similar pattern was observed in the gaseous product yield, with a maximum of 90.3% for B-Al/MCM-41 against 85.3% for Al/MCM-41. A low liquid yield was obtained in both cases.
It has been mentioned by a few reporters that a moderate decrease in the presence of strong acid sites, due to a decrease in alumina sites, is responsible for the increase in selectivity for propylene.
4 CONCLUSIONS
The conversion of methanol for the production of ethylene and propylene was performed in a fixed-bed reactor. The highest conversions were obtained at 450°C and at a flow rate of 90 ml/hr. Alumina-loaded MCM-41 was treated with boric acid, and the selectivity for propylene was observed to increase from 8.1% to 15.3%, while the selectivity for ethylene was observed to decrease; the treatment thus mainly promoted the propylene yield.
REFERENCES
Abrokwah, R.Y., Deshmane, V.G. & Kuila, D. (2016). Comparative performance of M-MCM-41 (M:
Cu, Co, Ni, Pd, Zn and Sn) catalysts for steam reforming of methanol. Journal of Molecular Cataly-
sis A: Chemical, 425, 10–20.
Almutairi, S.M.T., Mezari, B., Pidko, E.A., Magusin, P.C.M. & Hensen, E.J.M. (2013). Influence of
steaming on the acidity and the methanol conversion reaction of HZSM-5 zeolite. Journal of Cataly-
sis, 307, 194–203.
Bhattacharyya, K.G., Talukdar, A.K., Das, P. & Sivasanker, S. (2003). Al-MCM-41 catalysed alkylation
of phenol with methanol. Journal of Molecular Catalysis A: Chemical, 197(1–2), 255–262.
Du, G., Lim, S., Yang, Y., Wang, C., Pfefferle, L. & Haller, G.L. (2006). Catalytic performance of vana-
dium incorporated MCM-41 catalysts for the partial oxidation of methane to formaldehyde. Applied
Catalysis A: General, 302(1), 48–61.
Epelde, E., Santos, J.I., Florian, P., Aguayo, A.T., Gayubo, A.G., Bilbao, J. & Castañoa, P. (2015). Con-
trolling coke deactivation and cracking selectivity of MFI zeolite by H3PO4 or KOH modification.
Applied Catalysis A: General, 505, 105–115.
1 INTRODUCTION
Recently, Advanced Oxidation Processes (AOPs) have emerged as promising methods for the
removal of organic pollutants from water. AOPs are based on the use of hydroxyl radicals
for oxidative disintegration of organic pollutants into environmentally benign substances
such as CO2 and H2O. Diphenamid (DPA) is a herbicide used for controlling annual grasses
and weeds in tomato, potato, peanut, and soybean plants (Schultz & Tweedy, 1972; Sirons
et al., 1981). As in the case of other pesticides and herbicides, DPA also enters into the water
bodies and poses a threat to the environment in general, and to the aquatic organisms in par-
ticular. Therefore, the development of effective methods for remediation of polluted water
containing even trace amounts of DPA is significant. Researchers have established that toxic
pollutants impact on the health of the ecosystem and present a threat to humans through the
contamination of drinking water supplies (Eriksson et al., 2007).
Several researchers have studied the photochemical degradation of DPA in aqueous solution. Rosen (1967) studied the homogeneous photodegradation of DPA by UV and sun-
light irradiation. Rahman et al. (2003) investigated the photocatalytic degradation of DPA in
aqueous P25 TiO2 suspension under the illumination of a medium-pressure mercury lamp.
Liang et al. (2010) studied the homogeneous and heterogeneous degradations of DPA in
aqueous solution by direct photolysis with UVC (254 nm) and by photocatalysis with TiO2/
UVA (350 nm). H2O2-based AOP studies on the degradation of diphenamid have not been
reported by researchers so far.
The objective of the present work is to study the application of UV/H2O2-based AOP for
the removal of DPA from water. Degradation of the pesticide is not the only concern for us,
but the compounds and the intermediates which are formed during the course of these reac-
tions are also of utmost importance. Hence, we have employed the most advanced analytical
2.1 Materials
The diphenamid used was 99.99% pure, purchased from Sigma Aldrich. The hydrogen peroxide solution was of standard laboratory quality, 20 v/v.
2.2 Methods
2.2.1 Extent of degradation analysis
2.2.1.1 Preparation of standard solution for extent of degradation analysis
DPA solution of 1,000 ppm (1 g of diphenamid dissolved in 1,000 ml solution) was magneti-
cally stirred for 60 min. All reactions were performed at room temperature.
2.2.1.2 TOC analysis of sample with UV degradation
About 200 ml of the DPA standard solution was taken in a beaker and placed in a UV reac-
tor. The solution was thoroughly stirred using a magnetic stirrer. The reactor was started, and samples were collected from the solution at regular intervals of time.
2.2.1.3 TOC analysis of sample with H2O2 degradation
About 200 ml of the DPA standard solution was taken in a beaker, placed on a magnetic stir-
rer and thoroughly stirred. To this solution 10 microliters of 20 v/v H2O2 was added and the
initial sample was collected. Then the timer was started, and the samples were collected from
the solution at regular intervals.
2.2.1.4 TOC analysis of sample with H2O2 and UV degradation
About 200 ml of the DPA standard solution was taken in a beaker and placed in a UV reac-
tor. The solution was thoroughly stirred using a magnetic stirrer. To this solution 10 microlit-
ers of 20 v/v H2O2 was added and the initial sample was collected. The timer was started, and
the samples were collected from the solution at regular intervals.
2.2.1.5 Analytical method
The samples obtained were analyzed using a TOC analyzer. Initially, the analyzer was calibrated using a blank sample. After that, each sample was analyzed using a suitable method; here, the Non-Purgeable Organic Carbon (NPOC) measurement method was used.
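The extent-of-degradation curves are obtained by normalizing each NPOC/TOC time series to its initial value, giving Cn/C0. A minimal sketch (the sample numbers are illustrative, not measured values from this study):

```python
import numpy as np

def cn_over_c0(toc_series):
    """Normalize a TOC time series to its initial value (Cn/C0)."""
    toc = np.asarray(toc_series, dtype=float)
    return toc / toc[0]

samples = [610.0, 540.0, 470.0, 390.0, 330.0]  # mg/L at regular intervals
print(cn_over_c0(samples))
```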
Figure 3. TOC analysis data of sample treated with hydrogen peroxide/UV.
Figure 4. Cn/C0 data plot for extent of degradation analysis.
4 CONCLUSIONS
The results of the present study confirm that the diphenamid sample with added hydrogen peroxide and irradiated by UV shows the maximum degradation within the prescribed time of five hours. The addition of hydrogen peroxide alone gives considerable degradation, but hydrogen peroxide together with UV gives the maximum degradation as well as a better degradation scheme, since hydrogen peroxide can degrade pesticides and UV can destroy microorganisms.
However, knowledge of the compounds formed by the degradation of diphenamid and of the behavior and properties of the intermediate compounds in the reaction pathway is beyond the scope of this work; these will serve as subjects for future investigations.
ACKNOWLEDGMENT
One of the authors (Manju M.S.) acknowledges with thanks the financial assistance received from the Centre for Engineering Research and Development (CERD), Government of Kerala.
REFERENCES
Eriksson, E., Baun, A., Mikkelsen, P.S. & Ledin, A. (2007). Risk assessment of xenobiotics in stormwa-
ter discharged to Harrestup Å, Denmark. Desalination, 215(1–3), 187–197.
Liang, H.C., Li, X.Z., Yang, Y.H. & Sze, K.H. (2010). Comparison of the degradations of diphenamid
by homogeneous photolysis and heterogeneous photocatalysis in aqueous solution. Chemosphere,
80(4), 366–374.
Rahman, M.A., Muneer, M. & Bahnemann, D. (2003). Photocatalysed degradation of a herbicide derivative, diphenamid, in aqueous suspension of titanium dioxide. Journal of Advanced Oxidation Technologies, 6(1), 100–108.
Rosen, J.D. (1967). The photolysis of diphenamid. Bulletin of Environmental Contamination and Toxi-
cology, 2(6), 349–354.
Schultz, D.P. & Tweedy, B.G. (1972). Effect of light and humidity on absorption and degradation of
diphenamid in tomatoes. Journal of Agriculture and Food Chemistry, 20(1), 10–13.
Sirons, G.J., Zilkey, B.F., Frank, R. & Paik, N.J. (1981). Residues of diphenamid and its phytotoxic
metabolite in flue-cured tobacco. Journal of Agriculture and Food Chemistry, 29(3), 661–664.
ABSTRACT: Water is an essential element in sustaining life, and ensuring this requires a safe, adequate and accessible supply. Efforts should therefore be made to achieve a standard drinking water quality. This is done through water treatment plants, whose major objective is the removal of pathogenic microorganisms to prevent the spread of water-borne diseases. It is important that water treatment works be equipped with adequate disinfection systems. However, disinfection processes can result in the formation of both organic and inorganic Disinfection By-Products (DBPs), the most well-known of which are organochlorine by-products such as Trihalomethane (THM) compounds. THM concentrations in drinking water were measured using Gas Chromatography (GC) at various places in Thrissur City, including at the Government Engineering College, which receives water treated at the Peechi water treatment plant. A study has been conducted on the parameters that affect the formation of THMs, and response surface designs have been created for each THM using the design of experiments tool in Minitab 17, with the most influential parameters taken from the parameter study.
1 INTRODUCTION
There are many sources of contamination in drinking water, ranging from natural substances
leaching from soil to harmful chemical discharges from industrial plants. In developing coun-
tries, nearly half of the population is suffering due to lack of potable water, or due to con-
taminated water (WHO, 1992). Since disinfection is the most popular step in the treatment of
water, the cheapest method is preferred. Hence, chlorination is the most common disinfection
method as it remains in water until it has been consumed (Sadiq & Rodriguez, 1999). How-
ever, this poses a chemical threat to human health as it reacts with organic matter available in
the water to produce harmful products.
During the chlorination of water containing organic matter, different Disinfection By-Products (DBPs) are formed, and more than 300 different varieties have been identified (Becher, 1999). Trihalomethanes account for 37–58% of the total measured halogenated by-products. Trihalomethanes (THMs) are a group of four volatile compounds that are formed when chlorine reacts with organic matter present in the water (Frimmel & Jahnel, 2003).
The THMs include: Trichloromethane (Chloroform, CF), Bromodichloromethane (BDCM), Dibromochloromethane (DBCM) and Tribromomethane (Bromoform, BF). These are classified as possible human carcinogens by the US Environmental Protection Agency (USEPA) (US Environmental Protection Agency, 1990). In the USEPA guidelines, the maxi-
mum contaminant level specified in the DBP Stage I Rule is 80 µg/L for total THMs (US
Environmental Protection Agency, 1998). The formation of these compounds depends on
several other factors such as temperature, pH, disinfectant dose, contact time, inorganic
compounds and organic matter present in the drinking water supply (Bull et al., 1995; Wu
et al., 2001; Bach et al., 2015).
2.1 Chemicals
• Certified reference material of THM, 2,000 µg mL−1 in methanol, was purchased from Supelco, USA.
• n-hexane of HPLC grade (99%) was purchased from Sigma-Aldrich.
• Hypochlorite solution (13%) was purchased from suppliers.
• Working standards were prepared using n-hexane.
• Other chemicals (e.g. NaOH, H2SO4, NH4Cl) were analytical grade.
The three-level second-order designs demand comparatively little experimental data to enable precise prediction. In the Box-Behnken method, a total of 27 experiments, including three center points, were carried out to estimate the formation of THMs. The quality of the fit of this model is expressed by the coefficient of determination R². The concentration of each THM after the chlorination experiment is subtracted from its level before chlorination, and the concentrations are given in Table 4.
Models have been created to determine the concentration or quantity of each compound based on the data. Regression equations for each component in uncoded units are given by:
a. Chloroform
CF formed = −0.435 + 0.000427a − 0.201b + 0.0388c + 0.00419d + 0.000001a² − 0.000480c² + 0.000079d² − 0.000022ad + 0.00389bc + 0.00201bd − 0.000194cd
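As a rough illustration (not part of the original study), the fitted uncoded model can be evaluated directly in code. The factor meanings assumed here, a = COD (ppm), b = HOCl dose (ppm), c = temperature (°C) and d = reaction time (min), are inferred from the parameter study; a minimal Python sketch:

def cf_formed(a: float, b: float, c: float, d: float) -> float:
    """Predicted chloroform formed (ppb, per Table 4) from the uncoded regression model."""
    return (-0.435 + 0.000427 * a - 0.201 * b + 0.0388 * c + 0.00419 * d
            + 0.000001 * a * a - 0.000480 * c * c + 0.000079 * d * d
            - 0.000022 * a * d + 0.00389 * b * c + 0.00201 * b * d
            - 0.000194 * c * d)

# Example: the optimized settings reported in the conclusion
print(cf_formed(a=646.46, b=0.20, c=50.0, d=90.0))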
[Table 4. THM concentrations (ppb) measured for the design factors a (ppm), b (ppm), c (°C) and d (min).]
4 CONCLUSION
Disinfection is a crucial and necessary step in the drinking water treatment process. Trihalomethanes are the major disinfection by-products formed in drinking water when organic matter present in the water reacts with chlorine. They are toxic and carcinogenic and cause severe health effects. THM concentrations were determined from water samples taken at two-week intervals from various parts of Thrissur City. The deviations from the mean concentration values are small, and the concentrations of all the components are below the highest permissible level.
COD, HOCl concentration, temperature and reaction time are observed to be the most influential parameters in the parameter study. Response surface designs were created using Minitab 17 for all components. Knowing the parameter values, uncoded coefficients and regression equation, it is possible to calculate the formation of the individual THM components with around 83% reliability. The Minitab response optimizer tool gave optimized values of COD 646.46 ppm, HOCl concentration 0.20 ppm, temperature 50°C, and reaction time 90 minutes for the minimal formation of THMs. The Microsoft Excel Solver tool was also used to optimize the parameter values for zero THM formation.
REFERENCES
Bach, L., Garbelini, E.R., Stets, S., Peralta-Zamora, P. & Emmel, A. (2015). Experimental design as a
tool for studying trihalomethanes formation parameters during water chlorination. Microchemical
Journal, 123, 252–258.
Becher, G. (1999). Drinking water chlorination and health. CLEAN–Soil, Air, Water, 27(2), 100–102.
Bull, R.J., Birnbaun, L.S., Cantor, K.P., Rose, J.B., Butterworth, B.E., Pegram, R. & Tuomisto, J. (1995).
Water chlorination: Essential process or cancer hazard? Toxicological Sciences, 28(2), 155–166.
Frimmel, F.H. & Jahnel, J.B. (2003). Formation of haloforms in drinking water. In Haloforms and Related Compounds in Drinking Water, 5(Part G) (pp. 1–19). Berlin, Heidelberg: Springer.
Kim, J., Chung, Y., Shin, D., Kim, M., Lee, Y., Lim, Y. & Lee, D. (2002). Chlorination by-products in
surface water treatment process. Desalination, 151(1), 1–9.
Sadiq, R., & Rodriguez, M.J. (2004). Disinfection by-products (DBPs) in drinking water and predictive
models for their occurrence: A review. Science of the Total Environment, 321(1–3), 21–46.
Siddique, A., Saied, S., Mumtaz, M., Hussain, M.M. & Khwaja, H.A. (2015). Multipathways human health risk assessment of trihalomethane exposure through drinking water. Ecotoxicology and Environmental Safety, 116, 129–136.
The Environmental Protection Agency (1998). Water Treatment Manual: Disinfection.
US Environmental Protection Agency. (1990). Risk assessment, management and communication of
drinking water contamination, EPA/600/4-90/020. Washington, DC.
US Environmental Protection Agency. (1998). National primary drinking water regulations; disinfectants
and disinfection by-products; final rule, fed. regist., 63(241), 69389–69476.
World Health Organization. (1992). Our planet our health: report of the WHO commission health and
environment. Geneva: World Health Organization.
World Health Organization. (2008). Guidelines for drinking-water quality [electronic resource]: 1st and
2nd addenda, vol. 1, Recommendations.
Wu, W.W., Benjamin, M.M., & Korshin, G.V. (2001). Effects of thermal treatment on halogenated dis-
infection by-products in drinking water. Water Research, 35(15), 3545–3550.
Ann M. George
Department of Chemical Engineering, University of Kerala, Kerala, India
K.B. Radhakrishnan
Department of Chemical Engineering, TKM Engineering College, Kollam, India
A. Jayakumaran Nair
Department of Biotechnology, Kariavattom Campus, Thiruvananthapuram, India
ABSTRACT: Perchlorates are highly soluble anions used as ingredients in solid rocket fuels, fireworks, missiles, batteries, etc. The potential human risks of perchlorate exposure include effects on the nervous system, inhibition of thyroid activity, and mental retardation in infants. Various materials and techniques have been used to remove perchlorate from drinking water. For water lightly polluted with perchlorate, adsorption is one of the most attractive, easiest, safest and most cost-effective physico-chemical treatment methods, especially for drinking water. Rice husk, one of the major by-products of the rice milling industry, can be used as a low-cost adsorbent for perchlorate removal.
The present study deals with the adsorption of perchlorate using cationically modified rice husk, optimizing various parameters such as pH, adsorbent mass, adsorbate concentration, temperature and time of adsorption. The surface charge, rather than the surface area, is the major governing factor for perchlorate removal. To enhance the adsorption capacity, modifications with cationic surfactants were made: powdered rice husk was surface modified with Cetyl Trimethyl Ammonium Bromide (CTAB). The adsorption of perchlorate was studied experimentally after surface modification, and the parameters pH, adsorbent mass, adsorbate concentration, temperature and time were optimized. The adsorption performance was evaluated at different conditions, and more than 97% adsorption efficiency was achieved in perchlorate removal.
1 INTRODUCTION
Perchlorate (ClO4−) is a highly soluble anion consisting of a central chlorine atom surrounded by four oxygen atoms (Coates & Achenbach 2004). Perchlorate salts have been manufactured and used as ingredients in solid rocket fuels, highway safety flares, airbag inflators, fireworks, missile fuels, batteries and matches (Nozawa-Inoue et al. 2005). Perchlorate mostly exists as ammonium perchlorate, sodium perchlorate, potassium perchlorate, magnesium perchlorate and lithium perchlorate (Shi et al. 2007, Urbansky 1998). The perchlorate ion is similar in size to the iodide ion and can therefore be taken up in place of iodide by the thyroid gland. The perchlorate ion thus disturbs the production of thyroid hormones and may disrupt metabolism in the human body, and the effects can be significant for pregnant women and fetuses (Urbansky 2002). The potential human risks of perchlorate exposure include effects on the nervous system, inhibition of thyroid activity and mental retardation in infants (Li et al. 2000). Various materials and techniques have been used to remove perchlorate from drinking water. These technologies can be classified as physical removal by sorption on materials, chemical reduction by metals, biodegradation by bacteria, electrochemical reduction on metal electrodes, and integrated techniques. Nowadays the better method for the treatment of
kept for shaking. After 1 hr of incubation, the samples were filtered and analysed for residual perchlorate. The percentage removal of perchlorate increases with pH, reaches a maximum at pH 4, and then decreases. This behavior can be attributed to the effect of solution pH on the charge of the functional groups of the rice husk, which makes adsorption more effective at acidic pH. The modified rice husk carries a positive charge at this acidic pH and is therefore more effective for adsorbing the maximum amount of perchlorate.
4 CONCLUSION
This study shows that cationic surfactant modified rice husk powder can be used as a very low-cost adsorbent for the removal of perchlorate from contaminated water. From the results obtained, it can be concluded that rice husk is a good adsorbent, owing to its ability to remove perchlorate from water even at low concentrations. The experiments showed that the adsorption was at its maximum at low pH, because the CTAB-modified rice husk carries a positive charge, which increases the adsorption of the negatively charged perchlorate ions.
REFERENCES
[1] Coates, J.D. & Achenbach, L.A. 2004. Microbial perchlorate reduction: Rocket-fuelled metabolism. Nature Reviews Microbiology 2:569–580.
[2] Nozawa-Inoue, M., Scow, K.M. & Rolston, D.E. 2005. Reduction of perchlorate and nitrate by microbial communities in vadose soil. Applied and Environmental Microbiology, 71(7):3928–3934.
ABSTRACT: This paper investigates the biosorption of methyl orange from aqueous solution on cucurbita pepo leaves powder. Batch studies were carried out for contact time (5–60 min), biosorbent dosage (0.05–0.5 g), pH (2–9), initial dye concentration (10–50 mg/L) and temperature (283–323 K). The isotherms considered in the present study are the Freundlich, Langmuir and Temkin isotherms; of these, the Freundlich isotherm gave the best fit. The kinetics were studied with pseudo-first-order and pseudo-second-order models, and the biosorption kinetics of cucurbita pepo leaves were well correlated with the pseudo-second-order model.
Keywords: biosorption, methyl orange, cucurbita pepo leaves, effluent treatment, isotherms,
kinetics
1 INTRODUCTION
The development of science and technology provides many benefits to human life, but it also has negative impacts on the surrounding environment, such as industrial waste problems. Many industries use dyes to color their products and then dispose of large volumes of colored wastewater as effluent. The major sources of dyes are industries such as the carpet, leather, printing and textile industries. The wastewater from these industries causes hazardous health effects to aquatic and human life.
Several processes have been applied for the removal of dyes from wastewater, including chemical, biological and physical processes. Even though chemical and biological treatments are effective for removing dyes, they require special equipment, are quite energy intensive, and often generate large amounts of by-products. In recent years, adsorption on activated carbon has been considered a superior physical technique compared to the others. However, commercial activated carbon is quite expensive, which has limited its application. For economic reasons, the search for alternative adsorbents to replace costly activated carbon is highly recommended. Many investigators have studied the feasibility of using inexpensive alternative materials such as chitosan beads (Negrulescu et al., 2014), calcined Lapindo (Jalil et al., 2010), chitosan intercalated montmorillonite (Umpuch & Sakaew, 2013), activated carbon coated monolith in a batch system (Darmadi & Thaib, 2010), sawdust and sawdust-fly ash (Lucaci & Duta, 2011), cork as a natural and low-cost adsorbent (Krika & Benlahbib, 2015), thermally treated eggshell (Belay & Hayelom, 2014), modified activated carbon from rice husk (Qiu et al., 2015), tree bark powder (Egwuonwu, 2013), and banana trunk fiber (Prasanna et al., 2014), as carbonaceous precursors for the preparation of activated carbons and for the removal of dyes from water and wastewater. The present investigation is an attempt to explore the possibility of using cucurbita pepo leaves powder to remove methyl orange from aqueous solution, since the raw material is harmless, cheap, and plentiful.
Methyl orange is an anionic azo dye with the molecular formula C14H14N3NaO3S. It is widely used as a pH indicator in titrations because it changes color at the pH of a mid-strength acid: its anionic form is yellow, and its acidic form is red.
2.2 Procedure
The methyl orange was obtained from Merck Laboratories Limited, Mumbai, India. A stock solution of 1,000 mg/L was prepared by dissolving 1 g of methyl orange in 1,000 mL of distilled water, which was later diluted to the required concentrations. All the solutions were prepared using distilled water. The solution pH for the pH studies was adjusted by adding HCl and NaOH as required. Concentrations of the dye solutions were determined from the absorbance of the solution at the characteristic wavelength of the dye using a double beam UV-Visible spectrophotometer, and final concentrations were determined from the calibration curve. The absorption wavelength of methyl orange is λmax = 464 nm. The variables studied and their ranges were: contact time, 5–60 min; aqueous dye solution pH, 2–9; initial dye concentration, 10–50 mg/L; biosorbent dosage, 0.05–0.2 g; temperature, 283–323 K.
The effect of contact time was determined by shaking 0.1 g of adsorbent in 100 mL of synthetic methyl orange solution with an initial dye concentration of 10 mg/L. Shaking was provided for different time intervals (5, 10, 15, 20, up to 60 min) at a constant agitation speed of 230 rpm. The effect of pH was determined by agitating 0.1 g of biosorbent and 100 mL of synthetic dye solution of initial concentration 10 mg/L at solution pH values ranging from 2 to 9, adjusted by adding 0.1 N HCl or 0.1 N NaOH. To study the effect of concentration, 100 mL of aqueous methyl orange solution at dye concentrations of 10 mg/L, 20 mg/L, 30 mg/L, 40 mg/L and 50 mg/L were taken in 250 mL conical flasks, and 0.1 g/L of cucurbita pepo leaves powder was added to each of the flasks. The total dye concentration in solution was analyzed with a double beam UV spectrophotometer at a wavelength of 464 nm for the methyl orange dye solution.
Experimental data was generated in a batch mode of operation to study the effect of various parameters on the removal of methyl orange from the aqueous solution (prepared in the laboratory) using cucurbita pepo leaves powder as the biosorbent. Various experimental runs were conducted in the present study. The parameters studied include: contact time, t (min); pH of the solution; initial concentration of the solution, C0 (mg/L); biosorbent dosage, w (g); and temperature, T (K).
% Removal = [(C0 − Ct)/C0] × 100 (1)

Dye uptake, Q = [(C0 − Ct) × V]/(W × 1000) (2)
where
C0 = initial concentration of the dye, mg/L
Ct = final concentration of the dye after time t, mg/L
V = volume of aqueous dye solution, mL
W = weight of biosorbent, g
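A minimal Python sketch of Equations (1) and (2), using hypothetical values for a single batch run:

def percent_removal(c0: float, ct: float) -> float:
    """Equation (1): % removal from initial and final dye concentrations (mg/L)."""
    return (c0 - ct) / c0 * 100.0

def dye_uptake(c0: float, ct: float, v_ml: float, w_g: float) -> float:
    """Equation (2): uptake Q (mg/g) for volume V in mL and biosorbent weight W in g."""
    return (c0 - ct) * v_ml / (w_g * 1000.0)

# Hypothetical run: 100 mL of 10 mg/L dye, 0.1 g biosorbent, final 0.8 mg/L
print(percent_removal(10.0, 0.8))          # 92.0 %
print(dye_uptake(10.0, 0.8, 100.0, 0.1))   # 9.2 mg/g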
3.3 Effect of pH
The pH of an aqueous dye solution is an important monitoring parameter in biosorption, as it affects the surface charge of the biosorbent material and the degree of ionization of the dye molecule. It is also directly related to the ability of hydrogen ions to compete with biosorbate molecules for active sites on the biosorbent surface. In the present study, methyl orange biosorption data was obtained in the pH range of 2 to 9 of the aqueous solution (C0 = 10 mg/L) using 0.1 g of 150 µm size biosorbent. The effect of the pH of the aqueous solution on the % biosorption of methyl orange dye is shown in Figure 4. The % biosorption of methyl orange dye increased from 87.90 to 93.68% as pH increased from 2 to 6, and decreased beyond a pH value of 6. As the pH of the system decreases, the number of negatively charged surface sites decreases and the number of positively charged surface sites increases, which favors the biosorption of dye anions through electrostatic attraction. As the acidity increases, the higher H+ concentration reduces the negative charge on the methyl orange, and the adsorption increases. However, when the basicity of the solution was increased, the amount of biosorption decreased due to the increased concentration of OH− ions. Hence the optimum pH for methyl orange is taken as 6.
4 BIOSORPTION KINETICS
The kinetics of the biosorption data was analyzed by two models, namely the pseudo-first-order and pseudo-second-order models. These models correlate solute uptake, which is important in the prediction of reactor volume.
Here Qeq and Q are the amounts of dye adsorbed at equilibrium and at any time t, respectively, and K1 is the rate constant of the pseudo-first-order biosorption.
The above equation can be presented in linear form as:
The plot of time t versus log(Qeq − Q) gives a straight line for first-order kinetics, facilitating the computation of the first-order biosorption rate constant (K1).
In the present study, the kinetics were investigated with 100 mL of aqueous solution (C0 = 10 mg/L) for contact times of 5 to 60 min. The Lagergren first-order plot is drawn in Figure 7. The pseudo-second-order model in linear form is:
(t/Q) = 1/(K2Qeq²) + (1/Qeq) t (7)
If pseudo-second-order kinetics are applicable, the plot of time t versus (t/Q) gives a linear relationship that allows computation of K2.
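For illustration only (the data below is hypothetical, not taken from the paper), the pseudo-second-order constants can be recovered from this linear plot by simple regression:

import numpy as np

t = np.array([5, 10, 15, 20, 30, 40, 60], dtype=float)   # contact time, min (hypothetical)
Q = np.array([6.1, 7.6, 8.3, 8.7, 9.0, 9.2, 9.3])        # uptake, mg/g (hypothetical)

# Regress t/Q on t: slope = 1/Qeq, intercept = 1/(K2*Qeq^2)
slope, intercept = np.polyfit(t, t / Q, 1)
Qeq = 1.0 / slope
K2 = slope ** 2 / intercept
print(f"Qeq = {Qeq:.2f} mg/g, K2 = {K2:.4f} g/(mg.min)")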
In the present study, the kinetics were investigated with 100 mL of aqueous solution (C0 = 10 mg/L) over agitation times of 5 min to 60 min. The pseudo-second-order plot of time t versus (t/Q) is drawn in Figure 8. The second-order kinetic equation obtained for the present study is given as:
5 ADSORPTION ISOTHERMS
In the present study, the isotherms studied are the Langmuir, Temkin and Freundlich isotherms. The linear forms of these isotherms were obtained at room temperature and are shown in the figures below.
Qeq = Qmax b Ceq/(1 + b Ceq) (9)

The above equation can be rearranged into the following linear form:

Ceq/Qeq = 1/(b Qmax) + (1/Qmax) Ceq (10)
where
Ceq is the equilibrium concentration (mg/L)
Qeq is the amount of dye ion adsorbed (mg/g)
Qmax is Qeq for a complete monolayer (mg/g)
b is sorption equilibrium constant (L/mg)
Figure 9 is the plot of Ceq versus Ceq/Qeq, which gives a straight line with slope 1/Qmax and intercept 1/(bQmax).
The correlation coefficient is R² = 0.9671, and the Langmuir equation obtained for the present study is:
(Ceq/Qeq) = 0.0107 Ceq + 0.0874 (11)
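Since the slope of Equation (11) is 1/Qmax and its intercept is 1/(bQmax), the Langmuir constants follow directly from the fitted coefficients; a minimal sketch using only the values reported above:

slope, intercept = 0.0107, 0.0874
Qmax = 1.0 / slope               # monolayer capacity, mg/g
b = 1.0 / (intercept * Qmax)     # sorption equilibrium constant, L/mg
print(f"Qmax = {Qmax:.1f} mg/g, b = {b:.3f} L/mg")   # ~93.5 mg/g and ~0.122 L/mg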
The Freundlich equation is conveniently used in linear form by taking the logarithm of both sides. The Freundlich isotherm is derived assuming a heterogeneous surface; Kf and m are indicators of biosorption capacity and biosorption intensity, respectively. The value of m should lie between 1 and 10 for favorable biosorption.
Figure 10 is a plot of log Ceq versus log Qeq, which gives a straight line with slope 1/m and intercept log Kf.
From the value of the biosorption intensity, it can be concluded that the Freundlich isotherm indicates favorable biosorption. The Freundlich equation obtained for the present study is:
Qeq = (RT/bT) ln(AT Ceq) (15)
where
R = Universal gas constant (8.314 J/mol.K)
T = Temperature of dye solution, K
AT, bT = Temkin isotherm constants
This can be written as:

Qeq = (RT/bT) ln Ceq + (RT/bT) ln AT (16)
Figure 11 shows a plot of ln Ceq versus Qeq, which gives a straight line with slope RT/bT and intercept (RT/bT) ln AT. The Temkin equation obtained for the present study is:
The isotherm constants obtained for various isotherm models are shown in Table 1.
The correlation coefficients obtained from the Langmuir, Freundlich and Temkin models were 0.9671, 0.9981 and 0.959, respectively, for methyl orange; the Freundlich equation was observed to be the most suitable for the experimental data of methyl orange dye, followed by the Langmuir and Temkin equations.
The maximum uptakes for the biosorption of methyl orange by various other biosorbents are tabulated in Table 2.
6 CONCLUSIONS
The following conclusions can be drawn from the above discussion. Methyl orange is removed efficiently by cucurbita pepo leaves. The optimum contact time for the process is 40 min at room temperature, the optimum pH is 6, and the optimum dosage is 0.1 g. The biosorption is favored by an increase in temperature. The adsorption kinetics are better described by pseudo-second-order kinetics. The isotherm studies are best fitted by the Freundlich isotherm, followed by the Langmuir and Temkin isotherms.
REFERENCES
Belay, K. & Hayelom, A. (2014). Removal of methyl orange from aqueous solutions using thermally
treated egg shell (locally available and low cost biosorbent). International Journal of Innovation and
Scientific Research, 8(1), 43–49. ISSN 2351-8014.
Chaidir, Z., Sagita, D.T., Zein, R. & Munaf, E. (2015). Bioremoval of methyl orange dye using durian fruit (durio zibethinus) murr seeds as biosorbent. Journal of Chemical and Pharmaceutical Research, 7(1), 589–599.
Danish, M., Hashim, R., Ibrahim, M.N.M. & Sulaiman, O. (2013). Characterization of physically acti-
vated acacia mangium wood-based carbon for the removal of methyl orange dye. BioResources, 8(3),
4323–4339.
Darmadi, D. & Thaib, A. (2010). Adsorption of anion dye from aqueous solution by activated carbon
coated monolith in a batch system. Jurnal Rekayasa Kimia dan Lingkungan, 7(4), 170–175. ISSN
1412–5064.
Deniz, F. (2013). Adsorption properties of low-cost biomaterial derived from Prunus amygdalus L. for dye removal from water. The Scientific World Journal, 961671.
Egwuonwu, P.D. (2013). Adsorption of methyl red and methyl orange using different tree bark powder. Academic Research International, 4(1), 330.
Gong, R., Ye, J., Dai, W., Yan, X., Hu, J., Hu, X., & Huang, H. (2013). Adsorptive removal of methyl
orange and methylene blue from aqueous solution with finger-citron-residue-based activated carbon.
Industrial & Engineering Chemistry Research, 52(39), 14297–14303.
Jalil, A.A., Triwahyono, S., Adam, S.H., Rahim, M.D., Aziz, M.A.A., Hairom, N.H.H., &
Mohamadiah, M.K.A. (2010). Adsorption of methyl orange from aqueous solution onto calcined
Lapindo volcanic mud. Journal of Hazardous Materials, 181(1–3), 755–762.
Krika, F. & Benlahbib, O.E.F. (2015). Removal of methyl orange from aqueous solution via adsorp-
tion on cork as a natural and low-cost adsorbent: Equilibrium, kinetic and thermodynamic study of
removal process. Desalination and Water Treatment, 53(13), 3711–3723.
Lucaci, D. & Duta, A. (2011). Removal of methyl orange and methylene blue dyes from wastewater
using sawdust and sawdust-fly ash as sorbents. Environmental Engineering and Management Journal,
10(9), 1255–1262.
Negrulescu, A., Patrulea, V., Mincea, M., Moraru, C. & Ostafe, V. (2014). The adsorption of tartrazine,
congo red and methyl orange on chitosan beads. Digest Journal of Nanomaterials and Biostructures,
9(1), 45–52.
Prasanna, N., Manivasagan, V., Pandidurai, S., Pradeep, D. & Leebatharushon, S.S. (2014). Studies on the removal of methyl orange from aqueous solution using modified banana trunk fibre. International Journal of Advanced Research, 2(4), 341–349.
Qiu, M.Q., Xiong, S.Y., Wang, G.S., Xu, J.B., Luo, P.C., Ren, S.C. & Wang, S.B. (2015). Kinetic for
adsorption of dye methyl orange by the modified activated carbon from rice husk. Advance Journal
of Food Science and Technology, 9(2), 140–145. ISSN: 2042-4868, e-ISSN: 2042-4876.
Su, Y., Jiao, Y., Dou, C. & Han, R. (2014). Biosorption of methyl orange from aqueous solutions
using cationic surfactant-modified wheat straw in batch mode. Desalination and Water Treatment,
52(31–33), 6145–6155.
Umpuch, C. & Sakaew, S. (2013). Removal of methyl orange from aqueous solutions by adsorption
using chitosan intercalated montmorillonite. Songklanakarin Journal of Science & Technology, 35(4),
451–459.
C. Megha
Government Engineering College, Kozhikode, India
K. Sachithra
Government Engineering College, Thrissur, India
Sanjay P. Kamble
National Chemical Laboratory, Pune, India
ABSTRACT: Dechlorination is a promising method for converting toxic chlorinated aromatics into less toxic, environmentally friendly, value-added products. In this work, catalytic hydrodechlorination was achieved with the aid of low-cost hydrogenation catalysts such as Raney nickel, a bimetallic catalyst, and palladium on activated carbon. Among them, Raney nickel shows the best result, taking only a few hours to completely dechlorinate 1,4-dichlorobenzene to benzene. A detailed study of dechlorination using Raney nickel was made in this work; it is an economically feasible method for treating chlorinated pollutants in wastewater. Experiments were done with varying parameters such as temperature, concentration, and pH. Dechlorination shows the best results at high temperature and lower pH. The product was confirmed using HPLC analysis and UV spectroscopy. Recycling of the Raney nickel catalyst was also performed in this study; it was not possible to do more than two recycles due to poisoning of the catalyst.
1 INTRODUCTION
Chlorinated aromatic compounds have at least one chlorine atom covalently attached to an aromatic ring. Due to the presence of the halogen group on the aromatic ring, they are highly resistant to biodegradation, and so their ubiquitous presence can be seen in every ecosystem. Chlorinated organics such as Trichloroethylene (TCE), Carbon Tetrachloride (CT), chlorophenols, and Polychlorinated Biphenyls (PCBs) are among the most common contaminants. Most of these chloroorganics were widely used in industry during the past half-century as solvents, pesticides, and dielectric fluids.
Many chlorinated organic chemicals (COCs) have been detected in surface waters and groundwater, in sewage, and in some biological tissues (Pearson, 1982). The observed levels are, in general, too low to cause immediate acute toxicity to mammals, birds, and aquatic organisms (Cheng et al., 2007). Treatment processes can and do reduce the concentrations of COCs in water; however, the degree of efficacy is often a function of chemical structure, cost, and energy. All treatment processes have some degree of side effects, such as the generation of residuals or by-products. Among the different methods of treatment, catalytic hydrodechlorination is emerging as an effective way to reduce the toxicity of COCs. This method reduces toxicity and increases biodegradability at low cost, recycling the compounds back to the forms from which the pollutants were originally produced, with low emissions.
2 EXPERIMENT
Experiments were conducted for studying the effect of initial concentration of 1,4-
dichlorobenzene, catalyst concentration, pH, and synergy of salts for Raney nickel. Dechlo-
rination experiments were conducted with various metal and bimetallic catalysts. A bimetallic
catalyst was prepared using its corresponding metal precursor and sodium borohydride was
used as a reducing agent. All experiments were carried out at room temperature (30 ± 2°C).
Stock solution of 1,000 ppm 1,4-dichlorobenzene was prepared in a 100 mL standard flask
with acetonitrile. The desired concentration of 1,4-dichlorobenzene for experiments was pre-
pared by micro-pipetting from this stock solution into deionized water.
Solutions of 1,4-dichlorobenzene were prepared at concentrations of 10, 20, 50, and 60 mg/L. Batch experiments were conducted with 150 mL of solution taken in a 250 mL conical flask with a tight lid. These flasks were kept in a rotary shaker at a constant shaking speed of 150 rpm for 24 hrs. After 24 hrs, 10 mL of sample was taken out and syringe filtered. The resulting 1,4-dichlorobenzene concentration was determined using HPLC, and from this final concentration the percentage dechlorination was calculated, where C (mg/L) is the concentration of 1,4-dichlorobenzene at time t and C0 is the initial concentration (mg/L). The first-order kinetics of dechlorination were described by:
ln(C/C0) = −Kt (2)

where
C: concentration of 1,4-dichlorobenzene in ppm at time t
C0: initial concentration of 1,4-dichlorobenzene in ppm
K: reaction rate constant
t: time
After a time interval, solutions were filtered and analyzed for 1,4-dichlorobenzene content
using HPLC.
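As a rough sketch (the concentration data below is hypothetical, not the paper's measurements), the rate constant K of Equation (2) can be estimated by regressing ln(C/C0) against time:

import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0])    # time, h (hypothetical)
C = np.array([14.1, 10.0, 5.1, 2.6, 1.3])  # 1,4-dichlorobenzene, mg/L (hypothetical)
C0 = 20.0                                  # initial concentration, mg/L

K = -np.polyfit(t, np.log(C / C0), 1)[0]   # slope of ln(C/C0) vs t equals -K
print(f"K = {K:.3f} per hour")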
3.6 Effect of pH
pH is an important parameter in dechlorination because the groundwater to be treated will have different pH values, due to different treatments and the presence of various salts and other chemicals. Most dechlorination studies show that acidic pH favors dechlorination. Experiments were done at pH 2, 7, and 11 to determine whether acidic, neutral or basic pH is best for dechlorination; the pH was adjusted using 0.1 N HCl and 0.1 N NaOH. Figure 6 shows that an acidic medium favors dechlorination in the first ten minutes, as the supply of hydrogen ions from hydrochloric acid promotes the fast formation of benzene, unlike basic conditions, where it is difficult to transfer hydrogen ions from water to the aromatic ring. The results show that acidic and neutral pH do not differ much in dechlorination after four hours of reaction time, whereas basic pH does not show comparatively good results. Therefore, dechlorination can be optimized at neutral pH, which is economically and chemically effective.
Figures 10 and 11. SEM images of Raney nickel before and after reaction.
4 ANALYSIS OF PRODUCTS
Figure 13. UV spectroscopy.
Benzene was separately injected into the HPLC column, and its peak appeared at 4.5 min. Afterwards, the reaction sample was analyzed by HPLC; the post-reaction HPLC result is shown in Figure 15. A small peak appears at 4.457 min, which represents the presence of benzene.
5 CONCLUSIONS
REFERENCES
Cheng, R., Wang, J. & Zhang, W. (2007). Reductive dechlorination of p-chlorophenol by nanoscale iron. Biomedical and Environmental Sciences, 20(5), 410–413.
Pearson, C.R. (1982). Halogenated aromatics. In O. Hutzinger (Ed.), Anthropogenic Compounds: Volume 3, Part B (The Handbook of Environmental Chemistry) (pp. 89–116). New York: Springer-Verlag.
Sopoušek, J., Pinkas, J., Brož, P., Buršík, J., Vykoukal, V., Škoda, D. & Šimbera, J. (2014). Ag-Cu colloid synthesis: Bimetallic nanoparticle characterisation and thermal treatment. Journal of Nanomaterials, 2014, 1.
Xia, C., Liu, Y., Xu, J., Yu, J., Qin, W. & Liang, X. (2009). Catalytic hydrodechlorination reactivity of
monochlorophenols in aqueous solutions over palladium/carbon catalyst. Catalysis Communications,
10(5), 456–458.
ABSTRACT: In the process industries, the control of liquid level is mandatory, but the control of a nonlinear process is difficult. Many process industries use conical tanks because their shape provides better drainage for solid mixtures, slurries and viscous liquids. Conical tanks are extensively used in the process, petrochemical, food processing and wastewater treatment industries. Control of the conical tank level is a challenging task due to its nonlinearity and continually varying cross section, which arises from the square-root relationship between the controlled variable (level) and the manipulated variable (flow rate). The system identification of the nonlinear process is carried out using mathematical modeling with Taylor series expansion, and the real-time implementation is done in Simulink using MATLAB.
1 INTRODUCTION
Every industry faces flow control and level control problems, and these processes exhibit features such as nonlinearity, time delay, and time-varying behavior. These features cause difficulties in obtaining an exact model. Conical tanks are extensively used in the process, petrochemical, food processing and wastewater treatment industries. The conical tank is inherently nonlinear due to its varying cross-sectional area. Although many innovative methodologies have been devised in the past 50 years to handle more complex control problems and to achieve better performance, the great majority of industrial processes are still controlled by means of simple Proportional-Integral-Derivative (PID) controllers. This seems to be because PID controllers, despite their simple structure, assure acceptable performance for a wide range of industrial plants, and their usage (the tuning of their parameters) is well known among industrial operators. PID controllers are thus simple and easy to apply if the process is linear. Since the process considered here is nonlinear, other techniques are also implemented, including Internal Model Control (IMC) and fuzzy logic control. The IMC design procedure is exactly the same as the open-loop control design procedure; in addition, the IMC structure compensates for disturbances and model uncertainty. The filter parameters in IMC are used to tune the model of the given system to get the desired output. The use of fuzzy logic controllers seems particularly appropriate, since it allows us to make use of the operator's experience and therefore to add some intelligence to the automatic control. Firstly, a PID controller is designed using the Ziegler-Nichols frequency response method, and its performance is observed; the Ziegler-Nichols tuned controller parameters are then fine-tuned to get satisfactory closed-loop performance. Secondly, IMC and fuzzy logic controllers are proposed for the same system. A performance comparison between the PID controller, the IMC-based PID controller, and the fuzzy logic controller is presented using MATLAB/Simulink. The simulation results are studied, and finally the conclusion is presented.
The system used is a conical tank, which is highly nonlinear due to the variation in its area of cross section. The manipulated variable is the inflow to the tank; the controlled variable is the level in the conical tank. A level sensor senses the level in the process tank and feeds it to the signal conditioning unit, and the conditioned signal is used for further processing. The level process station is used to perform the experiments and to collect the data. A computer is used as the controller; it runs the software used to control the level process station. The process consists of a process tank, reservoir tank, control valve, current-to-pressure (I/P) converter, level sensor and pneumatic signals from the compressor.
When the setup is switched on, the level sensor senses the actual level. The signal is first converted to a current signal in the range of 4 to 20 mA. This signal is then given to the computer through a data acquisition card. Based on the controller parameters and the set-point value, the computer takes the appropriate control action and the signal is sent to the I/P converter. The signal is converted to a pressure signal by the I/P converter, and the pressure signal acts on a control valve which regulates the inlet flow of water into the tank. A capacitive level sensor senses the level in the process and converts it into an electrical signal, which is fed to the I/V converter; this in turn produces a corresponding voltage signal for the computer. The actual water level in the storage tank, sensed by the level transmitter, is fed back to the level controller and compared with the desired level to produce the required control action that positions the level control valve as needed to maintain the desired level. The controller output is first given to the V/I converter and then to the I/P converter. The final control element (the pneumatic control valve) is controlled by the resulting air pressure, which in turn controls the inflow to the conical tank so that the level is maintained.
The system specifications (Rajesh et al., 2014) of the tank are as follows:
• Conical tank – Stainless steel body, height 70 cm, top diameter 35 cm, bottom diameter 2.5 cm
• Pump – Centrifugal 800 LPH
• Valve coefficient – K = 2
• Control Valve – Size ¼ pneumatic actuated type: Air to open, input 3–15 psi
• Rotameter range – 0–600 LPH.
tan θ = r/h and also tan θ = R/H (1)

Fin − Fout = π (R/H)² h² (dh/dt) (2)

Output flow rate, Fout = K√h (3)

dh/dt = α Fin h−2 − β h−3/2 (4)

α = 1/[π (R/H)²] (5)

β = K α (6)

At steady state, dhs/dt = α Fis hs−2 − β hs−3/2 = 0 (7)

Let y = (h − hs) and U = (Fin − Fis) (8)

dy/dt = −(1/2) β hs−5/2 y + α hs−2 U (9)

(2/β) hs5/2 (dy/dt) + y = (2α/β) hs1/2 U (10)

τ (dy/dt) + y = CU (11)

Y(s)/U(s) = C/(τs + 1) (12)
Operating point  Level hs (cm)  Transfer function model
1                10             y(s)/U(s) = 3.16/(62.08s + 1)
2                15             y(s)/U(s) = 3.87/(170.66s + 1)
1                10             y(s)/U(s) = 0.03097/(9.312s² + 28.15s + 0.451)
2                15             y(s)/U(s) = 0.03793/(36.942s² + 12.16s + 0.451)
where

τ = (2/β) hs5/2 (13)

C = (2α/β) hs1/2 (14)

y(s)/U(s) = 0.038/(36.942s² + 12.16s + 0.451) (16)
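As a consistency check (not part of the original paper), the model parameters can be computed from Equations (5), (6), (13) and (14) using the stated tank geometry (top diameter 35 cm, so R = 17.5 cm; H = 70 cm; K = 2) at the steady-state level hs = 10 cm; the result reproduces the 62.08 and 3.16 entries in the table above:

import math

R, H, K = 17.5, 70.0, 2.0     # tank geometry (cm) and valve coefficient
hs = 10.0                     # steady-state level, cm

alpha = 1.0 / (math.pi * (R / H) ** 2)    # Equation (5)
beta = K * alpha                          # Equation (6)
tau = (2.0 / beta) * hs ** 2.5            # Equation (13)
C = (2.0 * alpha / beta) * hs ** 0.5      # Equation (14)
print(f"tau = {tau:.2f}, C = {C:.2f}")    # ~62.08 and ~3.16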
u(t) = Kp e(t) + Kd (de(t)/dt) + Ki ∫0t e(τ)dτ (17)
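For illustration, a discrete-time version of Equation (17) can be simulated against the identified first-order model y(s)/U(s) = 3.16/(62.08s + 1); the gains below are placeholders, not the Ziegler-Nichols values used in the paper:

tau_p, Cp = 62.08, 3.16        # identified plant parameters
dt, T = 0.1, 600.0             # time step and horizon
Kp, Ki, Kd = 2.0, 0.05, 1.0    # hypothetical PID gains

y, integ, e_prev = 0.0, 0.0, None
sp = 1.0                       # unit step set-point
for _ in range(int(T / dt)):
    e = sp - y
    integ += e * dt
    deriv = 0.0 if e_prev is None else (e - e_prev) / dt
    u = Kp * e + Kd * deriv + Ki * integ   # Equation (17)
    e_prev = e
    y += dt * (-y + Cp * u) / tau_p        # first-order plant update
print(f"final level y = {y:.3f} (set-point {sp})")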
Kc = (τ + θ/2)/[K(λ + θ/2)],  Ti = τ + θ/2,  Td = τθ/(2τ + θ) (18)
operators into automatic control. MATLAB/Simulink provides tools to create and edit fuzzy inference systems, and the fuzzy systems can also be integrated into Simulink simulations. The fuzzy linguistic variables represent the level error (e), the change of level error (de), and the output control effort (u), respectively. A Fuzzy Logic System (FLS) can be defined as a nonlinear mapping of an input data set to a scalar output.
Firstly, a crisp set of input data is gathered and converted to a fuzzy set using fuzzy linguistic variables, fuzzy linguistic terms and membership functions; this step is known as fuzzification. Afterwards, an inference is made based on a set of rules. Lastly, the resulting fuzzy output is mapped to a crisp output using the membership functions, in the defuzzification step. Fuzzy Logic Controllers (FLCs) are mainly implemented on nonlinear systems, where they yield better results. In designing the controller, the number of parameters (Ilyas et al., 2013) needs to be selected, and then the membership functions and rules are selected based on heuristic knowledge.
The responses of the IMC and fuzzy controllers are compared with a conventional PID controller. The performance is studied by evaluating the rise time, settling time, Integral Square Error, Integral Absolute Error and Integral Time Absolute Error.
5 CONCLUSION
The control of a nonlinear process is a challenging task, and the nonlinearity of the conical tank is analyzed here. The transfer function of the system is modeled using system identification, and various controllers are simulated in MATLAB/Simulink. An open-loop step test method is used to find the proportional gain, delay time and dead time. A Taylor series approximation is used for the nonlinear approximation because of its accuracy compared to other nonlinear approximation techniques. The simulation results show that the IMC-based PID controller has a shorter settling time and rise time in reaching the steady-state value than the conventional controller. After analyzing the simulated responses of the models, the fuzzy controller is found to perform better than both the IMC-based PID and PID controllers.
REFERENCES
Fathima, M.S., Banu, A.N., Nisha, A. & Ramachandran, S. (2015). Comparison of controllers for a flow process in a conical tank. International Journal, 1, 145–148.
Ilyas, A., Jahan, S. & Ayyub, M. (2013). Tuning of conventional PID and fuzzy logic controller using
different defuzzification techniques. International Journal of Scientific & Technology Research, 2(1),
138–142.
ABSTRACT: This study investigates the synthesis of multi-walled nanotubes by the decomposition of acetylene over a misch metal catalyst in a chemical vapour deposition reactor. The synthesized CNTs were analyzed by different spectroscopic techniques. A kinetic model for MWNT growth is also proposed to investigate the dependence of the CNT production rate on the precursor flow rate. The model is validated by comparing its predictions with a set of experimental measurements and is simulated in MATLAB software. The experimental results were found to agree well with the theoretical predictions obtained from the model. In addition to the synthesis and modeling of CNTs, this work also describes a technique for conductive coating using multi-walled carbon nanotubes.
1 INTRODUCTION
Carbon nanotubes have attracted many researchers from academia and industry because of their remarkable mechanical and electronic properties, viz. high thermal and electrical conductivity, high aspect ratio, high tensile strength and low density compared with conventional materials. They also find promising applications in many fields such as field and light emission, biomedical systems, nanoelectronic devices, nanoprobes, nanosensors, conductive composites and energy storage. CNTs are either single-walled carbon nanotubes (SWNTs) or multi-walled carbon nanotubes (MWNTs). The biggest challenge in developing potential applications for CNTs lies in the production of pure CNTs at affordable prices. The commonly used CNT synthesis techniques are arc discharge, laser ablation, chemical vapor deposition (CVD), electrolysis, flame synthesis, etc. Among these methods, chemical vapor deposition is considered a cheap, simple and promising route for the large-scale synthesis of CNTs [1–3].
The CNT deposition profiles inside a CVD reactor depend strongly on various parameters such as reaction temperature, feed gas flow rate, carrier gas flow rate, catalyst type, etc. These reaction conditions can vary throughout the reactor, affecting both the yield and the rate of the reaction. It is therefore very important to develop a model of the system as an aid in studying the CNT growth process, since it can predict the yield without expensive experimental studies and helps in optimizing the process, thereby aiding CNT scale-up. This work is dedicated to the formation of MWNTs via the decomposition of acetylene over a misch metal catalyst, and to their characterization; it also covers the modeling of the CNT synthesis process and the development of a conductive coating using MWNTs.
Multi-walled nanotubes are synthesized by the decomposition of acetylene over an alloy of misch metal catalyst powder in a CVD reactor. The CVD reactor consists of a tubular
3 KINETIC MODELING
Catalytic graphitisation of carbon is used to explain the synthesis of multi-walled carbon nanotubes from acetylene by the catalytic chemical vapour deposition method. Catalytic graphitisation involves carbon dissolution, adsorption and reaction to produce CNTs. Equation (1) presents the mechanism of catalytic graphitisation of acetylene to CNTs using the CVD technique, where acetylene, and possibly its cracked fractions under heat, is dissociated into carbon atoms. The carbon atoms are deposited and adsorbed on the catalyst surface, and then react with each other to form C-C bonds, producing the carbon nanotubes.

2C2H2 → (Mm, 750°C) → 4C + 2H2
nC ↔ Cn−Mm → m[CNTs] + Mm (1)

At temperatures below 750°C, no chemical reaction or CNT production was observed, but as the temperature reached 750°C, production of CNTs occurred, indicating the complete decomposition of acetylene. At this temperature, the dissolution process was therefore assumed to become non-rate-limiting, and hence the rate of catalytic graphitisation and the rate of acetylene consumption become equal.
The catalytic graphitization of acetylene to CNTs is, as is normally the case for a solid-catalyzed reaction, expressed by the rate of reaction catalyzed by the solid surface per unit mass of catalyst:

−rC2H2 = (1/WMm)(dNC/dt) = kθCn (4)

Here, the Langmuir-Hinshelwood mechanism is adopted to obtain the reaction rate and equilibrium constants:

−rC2H2 = kKCn/(1 + KCn) (5)

1/rC2H2 = 1/(kKCn) + 1/k (6)
Equation (6) is used to determine the kinetics parameters used in computing the model.
The model which represents the production CNT by CVD is obtained as,
r = [kKCAn/(1 + KCAn)] exp[(1 − θC) k1t] (7)
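As a tentative sketch (the reconstruction of Equation (7) above is uncertain, and k, K, n, k1 and t below are placeholder values rather than the fitted kinetic parameters), the modeled production rate can be evaluated over a range of acetylene concentrations; with θ taken as 1, as in this work, the exponential factor drops out:

import numpy as np

k, K, n = 1.0e-3, 2.0e-4, 1.0   # hypothetical kinetic parameters
theta, k1, t = 1.0, 0.1, 60.0   # surface coverage (taken as 1) and placeholder constants

CA = np.linspace(500.0, 5000.0, 10)   # acetylene concentration, ppm
r = (k * K * CA ** n) / (1.0 + K * CA ** n) * np.exp((1.0 - theta) * k1 * t)
for c, ri in zip(CA, r):
    print(f"CA = {c:7.1f} ppm -> r = {ri:.3e}")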
The high electrical conductivity and low density of CNTs make them a suitable material for coating applications. Here, the coating is developed particularly for cryogenic tank exteriors. Currently a PU-based gray conductive coating, which has a higher density, is used for this purpose; CNTs, because of their innately low density, are expected to perform much better than this conventional approach.
The experimental procedure for developing the conductive coating involves two steps: substrate preparation and CNT dispersion in solvents. In substrate preparation, a PU foam of 10 × 10 × 2 cm was taken. Two coatings were applied over the foam: the first a PU coating as VBC, to prevent moisture permeation into the foam from the outside atmosphere, and the second the CNT conductive coating. Electrically conductive coatings were prepared by dispersing CNTs in a solvent. Before applying the second coating, the CNTs were fully dispersed in either acetone or toluene by sonication for 3 hours. The dispersed solution was then applied on the substrate, weighed and dried for 15 minutes to obtain a uniformly coated conductive layer on the PU substrate. The conductivity of the coating was measured by means of a surface resistivity meter, and the procedure was repeated until the expected conductivity (that of MWNTs, 10−3 S/m) was obtained.
Figure 2 is the HRTEM image of the MWNTs, which shows that they have a hollow structure. The image also reveals the diameter of the CNTs: the MWNTs have an inner diameter of 12.5 nm and an outer diameter of 38 nm.
Figure 3 shows the Raman spectra of the MWNTs, from which the multi-wall structure of the CNTs was identified. The peaks at 1577.96 cm−1, 1347.1 cm−1 and 2693.3 cm−1 represent the G, D and G′ modes of the Raman spectra. The presence of CNTs is identified by the G line at 1577.96 cm−1. The RBM mode was absent; hence, the multi-walled structure of the CNTs was confirmed. The D band at 1347 cm−1 is related to defects in the graphitic sheets or carbonaceous particles at the surface of the tubes.
Figure 4 is a plot of the weight loss in % versus the oxidation temperature, measured by heating the MWNTs in a TGA. The weight loss curve between 100 and 800°C was plotted by normalizing to about 100% weight loss at 800°C, at which the remaining weight was presumably that of the catalyst (usually 10% of the total weight). From the TGA plot, it can be seen that there was no significant weight loss up to 300°C. Above this temperature, a slight decrease in weight is seen, due to the burning of amorphous carbon, until the temperature reaches 500°C. On reaching 500°C, a sharp decrease in weight is seen due to the burning of the MWNTs, and by 650°C all the MWNTs have burned. There was no residual weight percentage at 650°C, which implies that the MWNTs produced at 750°C were essentially pure.
Figure 5 shows the FTIR spectrum of the MWNTs synthesized over the misch metal catalyst at 750°C. The wavenumbers 3410 cm−1, 1726 cm−1 and 1594 cm−1 represent the O-H stretches of the terminal carboxyl group, the carboxyl C = O groups and the C = C stretching, respectively. From this it is clear that acid treatment of the MWNTs improves their interfacial interaction; hence they can be used to make matrix structures.
rier gas flow, and the results obtained are presented in Figure 6(a). A kinetic model equation was developed to predict the production rate of MWNTs (Equation (7)).
The modeled equation predicts the rate of production of CNTs at various acetylene concentrations; the results obtained are shown in Table 1. It is evident that the plots are comparable, even though they are not exactly similar. This may be due to the assumption that carbon atoms occupy the entire surface of the catalyst; hence, the value of the fraction of surface area occupied by carbon atoms (θ) is taken as 1. It is expected that a more accurate value of θ may yield better results.
6 CONCLUSION
Chemical vapor deposition based production of carbon nanotubes yields good quality, uniform and well-aligned nanostructures. In this work, multi-walled nanotubes (87.06%) were synthesized by chemical vapor deposition of acetylene over a misch metal catalyst at 750°C. The CNTs were characterized by SEM, HRTEM, Raman spectroscopy, TGA and FTIR. From the Raman spectra and HRTEM, the presence of CNTs was identified and confirmed. FTIR analysis revealed that the CNTs are functionalized during acid treatment. The SEM image gives an idea of the morphology and structure of the MWNTs. The TGA results show that the CNTs synthesized at 750°C were almost 100% pure.
In the second section of the work, a kinetic model was developed for MWNT synthesis by CVD to study the effect of the acetylene flow rate on the CNT production rate. The model equation was based on the experimental data, and the theoretical predictions from the model equation and the experimental data are comparable. The maximum yield obtained was 0.374 mg/sec at an acetylene concentration of 4287.32 ppm.
The synthesized MWNTs were used to develop a CNT-based conductive coating (conductivity: 10−4 S/m), which is a suitable coating for cryogenic tank insulation. The obtained coating
Figure 6. (a) Effect of acetylene flow rate on CNT production rate; (b) computed CNT production rate.
Figure 7. Conductive coating (conductivity: 10−4 or 10−5 S/m; area: 0.01 m²).
ACKNOWLEDGEMENT
The satisfaction and euphoria on the successful completion of any task would be incomplete without mentioning the people who made it possible, whose constant guidance and encouragement crowned our effort with success. I express my heartfelt thanks to Sushreesangita Dash (external guide), Manoj N. (internal guide), V.O. Rejini (HOD), S.K. Manu (Dy. Manager, PFC, VSSC), Sriram P. (Engineer SC), all other staff of VSSC, all the faculty members of the Department of Chemical Engineering, my friends and my family.
NOTATIONS
REFERENCES
[1] Andrea Szabó, Caterina Perri, Anita Csató, Girolamo Giordano, Danilo Vuono and János B. Nagy, "Synthesis methods of carbon nanotubes and related materials", Materials 2010, 3, 3092–3140.
[2] Kalpana Awasthi, Anchal Srivastava and O.N. Srivastava, "Synthesis of carbon nanotubes", Physics Department, Banaras Hindu University, Varanasi-221 005, India.
[3] Adedeji E. Agboola, Ralph W. Pike, T.A. Hertwig and Helen H. Lou, "Conceptual design of carbon nanotube processes", Clean Techn Environ Policy (2007) 9:289–311.
[4] Kochandra Raji and Choondal B. Sobhan, "Simulation and modeling of carbon nanotube synthesis: current trends and investigations", Nanotechnology Reviews 2013; 2(1): 73–105.
[5] Sunny Esayegbemu Iyuke, Saka Ambali Abdulkareem, Samuel Ayo Afolabi and Christo H. vZ. Pienaar, "Catalytic production of carbon nanotubes in a swirled fluid chemical vapour deposition reactor", International Journal of Chemical Reactor Engineering, Volume 5, 2007, Note S5.
[6] K. Raji, Shijo Thomas, C.B. Sobhan, "A chemical kinetic model for chemical vapor deposition of carbon nanotubes", Applied Surface Science 257 (2011) 10562–10570.
[7] O. Levenspiel, "Chemical Reaction Engineering", third ed., Wiley India Pvt. Ltd., 2006.
[8] M.N. Masri, Z.M. Yunus, A.R.M. Warikh and A.A. Mohamad, "Electrical conductivity and corrosion protection properties of conductive paint coatings", Anti-Corrosion Methods and Materials, vol. 57, issue 4, pp. 204–208 (2010).
ABSTRACT: In the present study, the biosorption of nickel from aqueous solution onto sargassum tenerrimum powder (brown algae) was studied. The equilibrium study examined the effects of agitation time (1–210 min) (t), biosorbent size (45–300 µm) (dp), biosorbent dosage (2–24 g/L) (w), pH of the aqueous solution (1–8), initial nickel concentration in the aqueous solution (5–150 mg/L) (C0), and temperature (283–323 K) of the aqueous solution on the biosorption of the metal. The equilibrium data was well explained by the Langmuir, Temkin and Redlich-Peterson isotherms, with a correlation coefficient of 0.99, followed by the Freundlich isotherm. The kinetic studies reveal that the biosorption system obeyed the pseudo-second-order kinetic model, with a correlation coefficient of 0.99. From the values of ∆S, ∆H and ∆G, it is observed that the biosorption of nickel onto sargassum tenerrimum powder was irreversible, endothermic and spontaneous.
1 INTRODUCTION
All living organisms require heavy metals in low concentrations, but high concentrations of heavy metals are toxic and can cause cancer (Koedrith et al., 2013). Nowadays the environment is threatened by an increase in heavy metals, and in recent years the removal of heavy metals has therefore become an important issue (Nourbakhsh et al., 2002). Methods for removing metal ions from aqueous solution mainly consist of physical, chemical and biological technologies. Conventional methods for the removal of heavy metal ions from wastewater, such as chemical precipitation, flocculation, membrane filtration, ion exchange, electrodialysis and electrolysis, are often costly or ineffective for the treatment of low concentrations of pollutants (Wang & Chen, 2009). Biological uptake is a promising approach that has been studied in the past decade, and this process is a good candidate for replacing the older methods (Pinto et al., 2011). High efficiency, removal of all metals even at low concentrations, economy, and energy independence are the main advantages of biological uptake, which present this process as a viable new technology (Bai & Abraham, 2002). Biosorption is used to describe the passive, non-metabolically mediated process of metal binding to living or dead biomass (Rangsayatorn et al., 2002). Water pollution by heavy metals has been globally recognized as a growing environmental problem since the start of the Industrial Revolution in the 18th century (Dàvila-Guzmàn et al., 2011). Heavy metals may come from different sources such as the electroplating, textile, smelting, mining, glass and ceramic industries, as well as from storage batteries and the metal finishing, petroleum, fertilizer, and pulp and paper industries. Nickel is one of the industrial pollutants, possibly entering the ecosystem through soil, air, and water. Nickel is a toxic heavy metal found in the environment as a result of various natural and industrial activities. Higher concentrations of nickel cause poisoning effects such as headache, dizziness, nausea, tightness of the chest, dry cough and extreme weakness
2.3 Procedure
The procedures adopted to evaluate the effects of various parameters viz. agitation time (t),
biosorbent size (dp), biosorbent dosage (w), pH of aqueous solution, initial concentration of
nickel in aqueous solution (C0), and temperature of aqueous solution, on the biosorption of
metal (nickel) are explained below.
50 mL of aqueous solution containing 20 mg/L of initial concentration of nickel was
taken in a 250 mL conical flask. 10 g/L of 45 µm size biosorbent was added to the flask.
The conical flask was then kept on an orbital shaker at room temperature (30°C) and was
shaken for one min. Similarly, 21 more samples were prepared in conical flasks by adding
10 g/L of biosorbent and agitating for different time periods from 2 to 210 min. For the
resulting agitation equilibrium time of 120 min the further experiments were repeated for
varying biosorbent sizes viz. 75, 150 and 300 µm. The resulting optimum biosorbent size
was 45 µm. The above procedure was repeated for different adsorbent dosages of 4, 6, 8,
10, 12, 14, 16, 18, 20, 22 and 24 g/L at equilibrium agitation time (120 min) and biosorbent
size is optimum (45 µm). The equilibrium biosorbent dosage was found to be 18 g/L. To
determine the effect of pH on nickel biosorption, 50 mL of aqueous solution was taken
in each of 12 conical flasks. The pH values of aqueous solutions were adjusted to 1, 2, 3,
3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7 and 8 in separate 250 mL conical flasks. 18 g/L of 45 µm size
biosorbent was added to each of the conical flasks. The influence of initial concentration
on biosorption of nickel was determined as follows: 50 mL of aqueous solutions, each of
different nickel concentrations of 5, 10, 20, 30, 40, 50, 60, 75, 100, 125 and 150 mg/L were
taken in 11 250 mL conical flasks. 18 g/L of 45 µm size biosorbent was added to each of
the conical flasks. The flasks were agitated on an orbital shaker for equilibrium agitation
time at room temperature. The samples were allowed to settle and then filtered separately.
The samples thus obtained were analyzed by atomic absorption spectroscopy (AAS) for the
final concentrations of nickel in aqueous solutions.
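For reference, the reported quantities follow from the standard definitions % biosorption = 100(C0 − Ce)/C0 and uptake qe = (C0 − Ce)/w for a dosage w (g/L). A minimal Python sketch, with an assumed equilibrium concentration chosen only to illustrate the arithmetic:

def removal_percent(c0, ce):
    # % biosorption = 100 * (C0 - Ce) / C0, concentrations in mg/L
    return 100.0 * (c0 - ce) / c0

def uptake_qe(c0, ce, dosage):
    # Equilibrium uptake qe (mg/g) = (C0 - Ce) / w, dosage w in g/L
    return (c0 - ce) / dosage

# Illustrative check against the reported pH 4.5 point (97%, 1.077 mg/g):
c0, dosage = 20.0, 18.0   # mg/L and g/L, values from the procedure above
ce = 0.6                  # assumed equilibrium concentration, mg/L
print(f"{removal_percent(c0, ce):.1f} %, {uptake_qe(c0, ce, dosage):.3f} mg/g")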
3.4 Effect of pH
The % biosorption of nickel is plotted against the pH of the aqueous solution in Figure 4.
The % biosorption of nickel increases from 89.65% (0.996 mg/g) to 97% (1.077 mg/g) as the
pH is increased from 1 to 4.5, and decreases beyond a pH of 4.5. At lower pH values, the
occupation of the negative sites of the biosorbent by H+ ions reduces the vacancies available
for nickel ions and consequently decreases nickel biosorption. As the pH is raised, the ability
of the nickel ions to compete with H+ ions also increases. Although the sorption of nickel
ions rises with growing pH, a further increase in pH causes a decline in biosorption due to
the precipitation of nickel hydroxides. The predominant adsorbing forms of nickel are Ni2+
and NiOH+, which occur in the pH range of 4–6. A pH of 4.5 is considered optimum for the
study of the other parameters. Functional groups of the biosorbent, such as aliphatic C–H,
SO3, C–O and C=O, and aromatic C–H stretching bands and amine groups, were responsible
for nickel biosorption.
Similar results were reported for biosorption of nickel by waste pomace of olive oil factory
(Nuhoglu & Malkoc, 2009). An optimum pH ranging from 4 to 5 was reported by Aksu
et al. (2006), Özer et al. (2008) and Congeevaram et al. (2007) for the biosorption of nickel
using various biosorbents, namely dried Chlorella vulgaris, Enteromorpha prolifera and
Aspergillus species, respectively.
4 ADSORPTION ISOTHERMS
A Freundlich isotherm is drawn between log qe and log Ce in Figure 7 for the present data.
The equation obtained is:
log qe = 0.381·log Ce + 0.023
with a correlation coefficient of 0.97. The Freundlich constant (Kf) is found to be 1.054 and
the n-value of 0.381 lies between 0 and 1, indicating the applicability of the Freundlich iso-
therm to the experimental data.
qe/qm = bCe/(1 + bCe) (3)
where
Ce is the equilibrium concentration (mg/L),
qe is the amount of nickel adsorbed at equilibrium (mg/g), and
qm (mg/g) and b are the Langmuir constants.
Equation 3 is rearranged as:
Ce/qe = 1/(b·qm) + (1/qm)·Ce (4)
The Langmuir isotherm (Figure 8) for the present data is represented as:
Ce/qe = 0.219·Ce + 0.920 (5)
with good linearity (correlation coefficient, R2 = 0.98), indicating a strong binding of nickel
ions to the surface of Sargassum tenerrimum powder. The qm and b values are 4.566 mg/g and
0.98 respectively. The separation factor is 0.508, which indicates favorable biosorption
(0 < RL < 1) of nickel onto Sargassum tenerrimum powder.
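The linearized fit of Equation 4 is straightforward to reproduce. A minimal Python sketch, using assumed placeholder data rather than the measured values:

import numpy as np

# Assumed placeholder equilibrium data (Ce in mg/L, qe in mg/g).
ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([1.2, 2.0, 2.9, 3.6, 4.1])

# Linearized Langmuir (Equation 4): Ce/qe = 1/(b*qm) + (1/qm)*Ce
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm = 1.0 / slope             # monolayer capacity, mg/g
b = slope / intercept        # Langmuir constant, L/mg
rl = 1.0 / (1.0 + b * 20.0)  # separation factor RL at C0 = 20 mg/L
print(f"qm = {qm:.3f} mg/g, b = {b:.3f} L/mg, RL = {rl:.3f}")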
where
R = universal gas constant (8.314 J/(mol·K)),
T = temperature of the aqueous solution (K),
AT, bT = Temkin isotherm constants.
qe = A·Ce/(1 + B·Ce^g) (10)
where A (L/g) and B (L/mg) are the Redlich-Peterson isotherm constants and ‘g’ is the
Redlich-Peterson isotherm exponent, which lies between 0 and 1.
The linear form of the equation is:
ln(A·Ce/qe − 1) = g·ln(Ce) + ln B (11)
Although a linear analysis is not possible for a three-parameter isotherm, the three iso-
therm constants – A, B and g – can be evaluated from the pseudo linear plot using a trial and
error optimization method. A general trial and error procedure is applied to determine the
coefficient of determination (R2) for a series of values of ‘A’ for the linear regression of ln (Ce)
on ln [A(Ce/qe)–1] and to obtain the best value of ‘A’ with maximum ‘R2’. Figure 10 shows the
Redlich-Peterson plot drawn between ln [A(Ce/qe)–1] and ln Ce. For the present experimental
data, the equation obtained is:
ln(A·Ce/qe − 1) = 1.0718·ln Ce − 1.685 (12)
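This search is easy to automate. A minimal Python sketch of the trial-and-error procedure, again with assumed placeholder data:

import numpy as np

ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # assumed placeholder data
qe = np.array([1.2, 2.0, 2.9, 3.6, 4.1])

best = None
for a in np.linspace(0.5, 50.0, 1000):
    y = a * ce / qe - 1.0
    if np.any(y <= 0.0):
        continue  # ln undefined for this trial value of A
    g, ln_b = np.polyfit(np.log(ce), np.log(y), 1)
    r2 = np.corrcoef(np.log(ce), np.log(y))[0, 1] ** 2
    if best is None or r2 > best[0]:
        best = (r2, a, g, np.exp(ln_b))

r2, a, g, b = best
print(f"A = {a:.3f} L/g, B = {b:.3f} L/mg, g = {g:.3f}, R2 = {r2:.4f}")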
5 BIOSORPTION KINETICS
A plot of log (qe − qt) versus ‘t’ gives a straight line for first order kinetics, facilitating
the computation of the adsorption rate constant (K1). If the experimental results do not
follow this equation, the pseudo second order kinetic equation is applied instead.
In the present study, the Lagergren plot of log (qe–qt) vs. ‘t’ is shown in Figure 11. The
pseudo second order rate equation plot between (t/qt) and ‘t’ is drawn in Figure 12. The
resulting equations and constants are shown in Table 1.
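Both fits follow the standard linearizations log(qe − qt) = log qe − (K1/2.303)·t and t/qt = 1/(K2·qe²) + t/qe. A minimal Python sketch with assumed (t, qt) data:

import numpy as np

t = np.array([5.0, 15.0, 30.0, 60.0, 90.0, 120.0])   # min, assumed
qt = np.array([0.40, 0.70, 0.90, 1.02, 1.06, 1.08])  # mg/g, assumed
qe_exp = 1.10                                        # assumed equilibrium uptake

# Lagergren pseudo first order: log(qe - qt) = log(qe) - (K1/2.303) * t
s1, _ = np.polyfit(t, np.log10(qe_exp - qt), 1)
k1 = -2.303 * s1                                     # 1/min

# Pseudo second order: t/qt = 1/(K2*qe^2) + (1/qe) * t
s2, i2 = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / s2                                    # mg/g
k2 = s2 ** 2 / i2                                    # g/(mg.min)
print(f"K1 = {k1:.3f} 1/min, K2 = {k2:.3f} g/(mg.min), qe = {qe_fit:.2f} mg/g")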
[Table: thermodynamic parameters for nickel biosorption: C0 (mg/L), ΔS (J/(mol·K)), ΔH (J/mol) and −ΔG (kJ/mol) at 283, 293, 303, 313 and 323 K; the tabulated values were not recovered.]
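The thermodynamic quantities quoted in the conclusion are conventionally obtained from ΔG = −RT·ln K together with the Van’t Hoff relation ln K = ΔS/R − ΔH/(RT); since the working for this step is not reproduced above, the Python sketch below uses assumed distribution coefficients purely to illustrate the computation:

import numpy as np

R = 8.314                                          # J/(mol.K)
T = np.array([283.0, 293.0, 303.0, 313.0, 323.0])  # K, as in the table header
K = np.array([1.8, 2.1, 2.5, 2.9, 3.4])            # assumed distribution coefficients

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -R * slope        # J/mol; positive -> endothermic
dS = R * intercept     # J/(mol.K); positive -> increased randomness
dG = -R * T * np.log(K)  # J/mol; negative -> spontaneous
print(f"dH = {dH:.0f} J/mol, dS = {dS:.2f} J/(mol.K)")
print("dG (J/mol):", dG.round(0))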
As the correlation coefficient for the pseudo second order kinetics is 0.999, it describes
the mechanism of nickel–Sargassum tenerrimum powder interactions better than first order
kinetics (R2 = 0.93).
6 CONCLUSION
The equilibrium agitation time for biosorption of nickel is 120 min. The optimum dosage is
18 g/L. The % biosorption increases with pH up to 4.5. The experimental data are well
represented by the Langmuir isotherm, with a high correlation coefficient (R2 = 0.98). The
biosorption of nickel is better described by pseudo second order kinetics (K2 = 0.226 g/(mg·min)).
The biosorption is endothermic as ΔH is positive, irreversible as ΔS is positive, and
spontaneous as ΔG is negative.
Aksu, Z. & Donmez, G. (2006). Binary biosorption of cadmium (II) and nickel (II) onto dried Chlo-
rella vulgaris: Co-Ion effect on mono-component isotherm parameters. Process Biochemistry, 41(4),
860–868.
Al-Rub, F.A., El-Naas, M.H., Benyahia, F. & Ashour, I. (2004). Biosorption of nickel on blank alginate
beads, free and immobilized algal cells. Process Biochemistry, 39(11), 1767–1773.
Bai, R.S. & Abraham, T.E. (2002). Studies on enhancement of Cr (VI) biosorption by chemically modi-
fied biomass of Rhizopus nigricans. Water Research, 36(5), 1224–1236.
Congeevaram, S., Dhanarani, S., Park, J., Dexilin, M. & Thamaraiselvi, K. (2007). Biosorption of chro-
mium and nickel by heavy metal resistant fungal and bacterial isolates. Journal of Hazardous Materi-
als, 146(1–2), 270–277.
Dàvila-Guzmàn, N., Cerino-Cordova, F., Rangel-Méndez, J. & Diaz-Flores, P. (2011). Biosorption of
lead by spent coffee ground: Kinetic and isotherm studies. In AIChE Annual Meeting, Conference
Proceedings, 1–9.
Freundlich, H. (1907). Über die Adsorption in Lösungen (Adsorption in solutions). Z. Physiol. Chem.,
57(1), 384–470.
Gupta, V.K., Rastogi, A. & Nayak, A. (2010). Biosorption of nickel onto treated alga (Oedogonium
hatei): Application of isotherm and kinetic models. Journal of Colloid and Interface Science, 342(2),
533–539.
King, P., Rakesh, N., Beenalahari, S., Kumar, Y.P. & Prasad, V.S.R.K. (2007). Removal of lead from
aqueous solution using Syzygium cumini L.: Equilibrium and kinetic studies. Journal of Hazardous
Materials, 142(1–2), 340–347.
Koedrith, P., Kim, H., Weon, J.I. & Seo, Y.R. (2013). Toxicogenomic approaches for understanding
molecular mechanisms of heavy metal mutagenicity and carcinogenicity. Int J Hyg Environ Health,
216(5), 587–598.
Krishna, R.H. & Swamy, A.V.V.S. (2011). Studies on the removal of Ni (II) from the aqueous solutions
using powder of mosambi fruit peelings as a low cost adsorbent. Chemical Sciences Journal, 2011,
1–13.
Langmuir, I. (1918). The adsorption of gases on plane surfaces of glass, mica and platinum. J. Am.
Chem. Soc., 40(9), 1361–1403.
Mata, Y.N., Blazquez, M.L., Ballester, A., Gonzalez, F. & Munoz, J.A. (2008). Characterization of the
biosorption of cadmium, lead and copper with the brown alga Fucus vesiculosus, Journal of Hazard-
ous Materials, 158(2–3), 316–323.
Nourbakhsh, M.N., Kiliçarslan, S., Ilhan, S. & Ozdag, H. (2002). Biosorption of Cr6+, Pb2+ and Cu2+
ions in industrial waste water on Bacillus sp. Chem Eng J, 85(2–3), 351–355.
Nuhoglu, Y. & Malkoc, E. (2009). Thermodynamic and kinetic studies for environmentally friendly Ni
(II) biosorption using waste pomace of olive oil factory. Bioresource Technology, 100(8), 2375–2380.
Özer, A., Gürbüz, G., Çalimli, A. & Körbahti, B.K. (2008). Investigation of nickel (II) biosorption on
Enteromorpha prolifera: Optimization using response surface analysis. Journal of Hazardous Materi-
als, 152(2), 778–788.
Pahlavanzadeh, H., Keshtkar, A.R., Safdari, J. & Abadi, Z. (2010). Biosorption of nickel (II) from aque-
ous solution by brown algae: Equilibrium, dynamic and thermodynamic studies. Journal of Hazard-
ous Materials, 175(1–3), 304–310.
Pinto, P.X., Al-Abed, S.R. & Reisman, D.J. (2011). Biosorption of heavy metals from mining influenced
water onto chitin products. Chem Eng J, 166(3), 1002–1009.
Rangsayatorn, N., Upatham, E.S., Kruatrachue, M., Pokethitiyook, P., & Lanza, G.R. (2002). Phytore-
mediation potential of Spirulina (Arthrospira) platensis: Biosorption and toxicity studies of cad-
mium. Environmental Pollution, 119(1), 45–53.
Redlich, O.J.D.L. & Peterson, D.L. (1959). A useful adsorption isotherm. J Phys Chem, 63(6), 1024.
Wang, J. & Chen, C. (2009). Biosorbents for heavy metals removal and their future. Biotechnol. Adv.,
27(2), 195–226.
Lydia Jenifer
Yokogawa India Ltd., Bangalore, India
ABSTRACT: Integrated Gasification Combined Cycle (IGCC) power plants, which are
based on high-efficiency coal gasification technologies, are operated commercially or semi-
commercially worldwide. Various coal gasification technologies are embodied in these plants,
including different coal feed systems (dry or slurry), fireproof interior walls (fire brick or
water-cooled tubes), oxidants (oxygen or air), and other factors. These designs are several
decades old, but new systems and cycles are emerging to further improve the efficiency of the
coal gasification process. This work concerns the development of a Distributed Control System
(DCS) for automated operation, monitoring and control, which combines a Human Machine
Interface (HMI), interlocks, logic solvers, a historian, a common database, report generation,
alarm management and a common engineering suite into a single automated system. The
implementation is done using automatic methods of distribution, which guarantee the
preservation of the behavior of the whole system. The purpose of the project is to develop a
suitable control strategy using a DCS for the futuristic power plant, for better monitoring,
operation and availability.
1 INTRODUCTION
IGCC has potentially many advantages, including high thermal efficiency, good environmental
characteristics and reduced water consumption. Hence, gasification-based power plants are the
future of power production from coal or biomass. The aim of the project is to develop a
Distributed Control System for monitoring and controlling an IGCC power plant, which will
play a significant role in the future. In an IGCC power plant, the fuel syngas is generated by
gasifying coal, and combustion takes place in a high-efficiency gas turbine (GT). Compared
with conventional pulverised coal (PC) fired power plants, IGCC has potentially many
advantages, including:
High thermal efficiency: the Shell gasifier efficiency of IGCC generation is estimated to be
46–47% net on a low heating value (LHV) basis (44–45% net on a high heating value (HHV)
basis) for an FB-class gas turbine using bituminous coal. The highest reported efficiency
for an operating IGCC is 41.8% on an HHV basis. Good environmental characteristics
match or exceed those of the latest PC plants: the plant’s high thermal efficiency means that
emissions of CO2 are low per unit of generated power. In addition, emissions of SOx and
particulates are reduced by the requirement to deep-clean the syngas before firing in the
gas turbine.
Reduced water consumption: IGCC uses less water, since 60% of its power is derived
from an air-based Brayton cycle, reducing the heat load on the steam turbine condenser to
only 40% of that of an equivalently rated pulverised coal fired plant. Additionally, through
direct desulfurization of the gas, IGCC does not require a large flue gas desulfurization unit,
which consumes large amounts of water, thereby reducing water consumption in comparison
with a conventional pulverised coal fired power plant. Further gains in reducing water use
can be achieved when CCS is incorporated into the plant.
This approach requires a great deal of input from the various design engineers before all of
the details have been worked out. It requires the members of the design team to consider
all of the problems involved in successful instrumentation operation. The limitation of
detailed P&ID design before the detailed layout is complete is that the P&IDs must stay
synchronised with the electrical, process, instrumentation and piping requirements as closely
as possible. If major modifications are made to the project during detailed process and
instrumentation diagram (P&ID) development, the P&IDs must be modified. If the P&IDs
do not stay in sync with the work of the various departments, design team members may
use incorrect information.
The second approach is to allow the P&IDs to show only the instrumentation connections.
Instrumentation designers and engineers (and possibly the electrical engineers, to double-
check instrumentation wiring requirements) use this approach when the P&IDs are used
only among themselves. These diagrams do not carry the same level of detail as those of the
first approach. The intention is to show how the instrumentation and the process are related,
and possibly the electrical requirements.
One of the most popular air separation processes is cryogenic air separation, frequently used
in medium- to large-scale plants. This technology is mostly preferred for producing nitrogen,
oxygen and argon as gaseous and/or liquid products and is considered the most cost-effective
for high-production-rate plants. The process of cryogenic air separation was studied and a
P&ID was developed, as shown in Figure 2.
Similarly, the IGCC plant is divided into several subsections based on operations, and a
P&ID for each section is developed separately. The control loops identified and developed
are shown in Figure 3.
The coal grinding system provides a means to prepare the coal as a slurry feed for the
gasifier. Coal is continuously fed to the coal weigh feeder, which regulates and weighs the
coal fed to the grinding mill. The unloading of coal, its crushing, its storage and the filling of
boiler bunkers in a thermal power station are covered by the coal handling plant (CHP). The
main functions of the coal handling unit are to crush the coal into very fine particles for
gasification and to regulate the ratio of the coal and lime mix according to the requirements.
The lime ratio is increased to decrease the SOx level in the syngas produced in the
gasification chamber.
Gasification is used to produce combustible gas, also known as syngas or producer gas, from
organic feeds. The gasification of biomass is a thermo-chemical process that produces
relatively clean and combustible gas through pyrolytic and reforming reactions. The product
of gasification is a combustible synthesis gas, or syngas. Because gasification involves the
partial, rather than complete, oxidation of the feed, gasification processes operate in an
oxygen-lean environment. The ratio of the combustible components hydrogen (H2), methane
(CH4) and carbon monoxide (CO) to the moisture content determines the heating value of
the obtained fuel.
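As a rough illustration, the heating value can be estimated as the mole-fraction-weighted sum of component heating values. A minimal Python sketch, using standard literature LHV figures and an assumed composition (not values from this plant):

LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # MJ/Nm^3, standard values

def syngas_lhv(mole_fractions):
    # Non-combustible species (CO2, N2, H2O) contribute nothing.
    return sum(LHV.get(gas, 0.0) * x for gas, x in mole_fractions.items())

composition = {"H2": 0.30, "CO": 0.40, "CH4": 0.01, "CO2": 0.10, "N2": 0.19}
print(f"LHV ~ {syngas_lhv(composition):.1f} MJ/Nm^3")   # ~8.6 for this mix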
Scrubber systems, a group of air pollution control devices, are used to remove particulates
and/or gases from industrial exhaust streams. The gas leaving the atmospheric fluidized
gasifier is contaminated and at high temperature (500–800°C), so gas cleaning focuses mostly
on tar elimination and dust removal. Water, a specific organic liquid, or both may be applied
for tar elimination from the gas. The boiling point (volatility), availability and price of the
organic liquid are the major criteria for selecting a proper material. From the gasifier,
The objective of this paper is to develop a DCS for an IGCC power plant. Initially, the
complete process of the Integrated Gasification Combined Cycle (IGCC) power plant was
studied to determine the requirements for controlling the processes and the hardware used for
implementing the control system for parameters such as pressure, temperature, flow and level.
The whole power plant consists of several sections; each section was studied and a P&ID for
the process was developed. The P&ID is a specialized document showing a side-view
representation of all equipment. Integration and properly designed interfacing between the
DCS and other digital control packages are essential. The serial links should be made
redundant to ensure maximum operating continuity. The system bus and the input/output
(I/O) buses are also implemented redundantly, to guarantee maximum uptime. The sequence
of events
6 CONCLUSION
The proposed control system using Yokogawa CENTUM VP presents an effective approach
for monitoring and control. In real time, the implementation of this method is very effective,
and the system is available almost 99% of the time. The P&ID of the system was developed
by carefully studying the operations and plant processes of the IGCC power plant.
The present study has demonstrated the design of the DCS using I/O station inputs from
the different equipment. The quantities are measured and controlled, and the control valves
of the processes are manipulated in real time to implement temperature, pressure, level and
flow rate control; breakdowns are detected and the system is maintained. Live measured
values and status indications reveal the current situation. Process operators monitor and
control the long-distance processes from the console.
ACKNOWLEDGMENT
This work is mainly carried out at Yokogawa India Ltd. Bangalore, India. I would like to
thank all co-authors for the important discussions about the work.
1 INTRODUCTION
Solution processing has been reported to be a quite efficient method when both polyaniline
and the polymer matrix are ‘compatible’ with each other and ‘soluble’ in a common solvent
(Barra et al., 2002). The conductivity of solution-cast Polyaniline (PANI)/polymer blends
depends upon the ability of the solvent to finely disperse the conducting polymer, and on
the flocculation of the dispersed PANI in the blend (Paul & Pillai, 2002). The deprotonation
behavior and the stability of the dopant also play a vital role in the overall conductivity.
Most polyaniline blends described in the literature were processed from m-cresol, a solvent
that is both acidic and high-boiling. If the emeraldine base and the host polymer are
co-soluble in an acid dopant, the blend can be obtained as a film in the conducting form by
solution casting.
Conductive PANI/polyamide-11 blend fibers with relatively high electrical conductivity were
prepared by wet-spinning from concentrated sulfuric acid (Zhang et al., 2001). High-strength,
high-modulus electrically conducting PANI composite fibers were also reported (Hsu et al.,
1999) from PANI/PPD-T (poly(p-phenylene terephthalamide)) sulfuric acid solutions. Due
to the ease of handling and solvent removal, it is more convenient to use liquid organic acids
than sulfuric acid as solvents. Abraham et al. (1996) used formic acid as both the solvent and
the dopant for a polyaniline–nylon 6 blend system. The chemical modification or blending of
polyaniline with nylon 6 does not affect the crystal structure of either polyaniline or nylon 6;
this was confirmed by X-ray diffraction. The maximum conductivity of the films was about
0.2 S/cm, corresponding to a weight ratio of 0.5 (w/w) for PANI and nylon 6. Formic acid
was also used by Anand et al. (2000) for the
preparation of blends of PANI derivatives (Poly (O-Toluidine) (POT), Poly (M-Toluidine)
(PMT)) with Polymethylmethacrylate (PMMA). The blend was precipitated by the addition
of the formic acid solution to water (non-solvent). The thermal stability of the blends was
reported to be greater than that of their respective salts. Zagórska et al. (1999a) studied the
stability against deprotonation of polyaniline/polyamide 6 blends processed from formic acid.
They prepared polyaniline-polyamide 6 blends in two different ways: one with an additional
protonating agent and the other without an additional protonating agent. The blends of
polyaniline and polyamide 6 processed from formic acid were prepared without an additional
2 EXPERIMENTAL
2.1 Materials
The monomer aniline was double-distilled under reduced pressure prior to use. Ammonium
persulphate, ammonium hydroxide, formic acid and hydrochloric acid were analytical grade
reagents and were used without further purification. PMMA was supplied by SUMIPEX, Korea.
3 CHARACTERIZATION
The UV-VIS absorption spectra of the blend films were recorded using a Varian Cary 5E
model UV-VIS near–IR spectrophotometer.
The base form shows two major peaks at 630 and 330 nm, which is the characteristic absorption
spectrum of the base form of PANI. The peak in the 330 nm region is assigned to the
Figure 1. UV-VIS spectra of the salt form and base form of polyaniline.
Figure 4. UV-VIS spectra of formic acid doped polyaniline with plasticized polymethylmethacrylate
blend film, exposed to atmosphere for seven days.
5 CONCLUSIONS
Deprotonation studies show that PANI/PMMA blends and PANI-plasticized PMMA blends
processed from formic acid are not environmentally stable. The interaction between the blend
components and compatibility do not impart any resistance to deprotonation. As per the
literature, formic acid doped PANI with nylon 6 blends are immiscible and PANI/PMMA
blend systems are highly compatible. From the results obtained, we conclude that the nature
of the host matrix and the miscibility or immiscibility of the blend components do not change
the deprotonation behavior of formic acid processed polyaniline blend systems. The
PANI/PMMA formic acid system with an additional protonating agent, camphorsulphonic
acid, showed good stability against deprotonation.
REFERENCES
Abraham, D., Bharathi, A. & Subramanyam, S.V. (1996). Highly conducting polymer blend films of
polyaniline and nylon 6 by co-solvation in an organic acid. Polymer, 37(23), 5295–5299.
Anand, J., Palaniappan, S. & Sathyanarayana, D.N. (2000). Solution blending of poly (o- and m-
toluidine) with PMMA in formic acid medium: Spectroscopic, thermal and electrical behaviour. Eur
Polym J, 36(1), 157–163.
Angelopoulos, M., Asturias, G.E., Ermer, S.P., Rey, A., Scherr, E.M., Macdiarmid, G., & Epstein, A.J.
(1988). Polyaniline: Solutions, films and oxidation state. Molecular Crystals and Liquid Crystals,
160(1), 151–163.
Barra, G.M., Levya, M.E., Soares, B.G. & Sens, M. (2002). Solution-cast blends of polyaniline-DBSA
with EVA copolymers. Synth Met, 130(3), 239–245.
Cao, Y., Smith, P. & Heeger, A.J. (1992). Counter-ion induced processibility of conducting polyaniline
and of conducting polyblends of polyaniline in bulk polymers. Synth Met, 48(1), 91–97.
Cao, Y. & Smith, P. (1993). Liquid-Crystalline solutions of electrically conducting polyaniline. Polymer,
34(15), 3139–3143.
Dan, A. & Sengupta, P.K. (2004). Synthesis and characterization of polyaniline prepared in formic acid
medium. J Appl Polym Sci, 91(2), 991–999.
R. Sujith Kumar
VSSC, Trivandrum, Kerala, India
ABSTRACT: Control valves are usually actuated by pneumatic signals. Electrical actuators
can also be used, and these are more accurate and cheaper. Laboratory research on a flow
control valve actuated by a brushless DC (BLDC) motor has been carried out. The
characterization of a control valve driven by a DC motor was obtained to determine whether
the outlet flow depends on motor parameters such as the motor speed, stepping angle, etc.
Laboratory tests for the characterization work were done using nitrogen as the test fluid.
LabVIEW was used for the data acquisition. Different tests were carried out at varying
speeds and, from the time constants obtained, first order systems were designed. The
theoretical response curves for the first order systems were generated using MATLAB
software and compared with the responses of controllers with conventional pneumatic
actuators. The comparison showed that electrical actuators are much faster than pneumatic
actuators.
1 INTRODUCTION
Control valves are essential components of any piping system. The term valve usually
denotes a manually operated device, whereas a control valve is one with an actuator that
automatically opens or closes the valve, fully or partially, to a position dictated by signals
transmitted from the controlling instruments. Based on actuation, control valves can be
mainly classified into quarter-turn valves, multi-turn valves and check valves. A quarter-turn
valve allows only 90° rotation and includes ball valves, butterfly valves, spherical valves and
plug valves. In this work a ball valve is used for the characterization study. A multi-turn
valve allows 360° rotation and requires 4–5 turns to completely open or close the valve. Gate
valves, globe valves and pinch valves are all multi-turn valves.
Apart from the process industries, control valves find many other applications, such as
power stations, rockets and spacecraft, and automobile systems. For such applications, it is
sometimes necessary to develop non-conventional technologies. Control valves are actuated
using pneumatic, hydraulic and electrical actuators. Most industries use conventional
pneumatic signals for valve actuation. A control valve can also be actuated using a DC motor.
The objective of this work was to find the response time of a ball valve coupled with a
BLDC motor at various speeds, so as to characterize the system. Knowledge of the BLDC
motor and its drive is essential for the characterization study. The design and implementation
of a BLDC motor drive for automotive applications was reported to give reliable results (Park
et al. 2012). Most rocket systems use lightweight propulsion systems; indeed, one of the most
desired technological requirements for an efficient aerospace launch vehicle is a lightweight
propulsion system. Brushless DC motors, whose efficiency can be greater than 90%, are
useful in this sense (Van Neikerk 2015).
3 EXPERIMENT
Stored nitrogen gas at a pressure of 150 bar is regulated to 10 bar using a spring-loaded
pressure regulator. It is then passed through a solenoid valve. A pressure transducer is
provided next to the solenoid valve in the flow line to measure the incoming pressure. The gas
then passes through the motorized ball valve. The upstream pressure is measured using a
strain gauge type pressure transducer, and a pressure transmitter sends the measured pressure
to the data acquisition (DAQ) system, which consists of an NI data acquisition unit and
LabVIEW software. The details of the methodology adopted for the test and the test
sequence are explained below.
4 RESULTS
The experimental results obtained were analysed and graphs were plotted. Time constants
were obtained and first order systems were designed. Figure 5 shows first order systems with
different time constants.
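A Python stand-in for the MATLAB step is sketched below, with assumed time constants rather than the measured ones: it generates y(t) = 1 − exp(−t/τ) and reads off the 10–90% rise time and the 2% settling time (a pure first order response has no overshoot, hence no peak time):

import numpy as np

def step_metrics(tau, n=20000):
    # First order unit step response y(t) = 1 - exp(-t/tau)
    t = np.linspace(0.0, 8.0 * tau, n)
    y = 1.0 - np.exp(-t / tau)
    rise = t[np.searchsorted(y, 0.90)] - t[np.searchsorted(y, 0.10)]
    settle = t[np.searchsorted(y, 0.98)]   # enters the 2% band and stays
    return rise, settle

for tau in (0.5, 1.0, 2.0):                # assumed time constants, seconds
    rise, settle = step_metrics(tau)
    print(f"tau = {tau} s: rise = {rise:.2f} s, settling = {settle:.2f} s")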
Comparative study
A comparison of the results obtained in this work with a previous work using pneumatically
actuated controller (Sanoj et al. 2013) is tabulated as shown in Table 3.
[Table 3. Comparison of response characteristics: time constant (s), rise time (s), settling time (s) and peak time (s); tabulated values not recovered.]
5 CONCLUSION
The characterization of a ball type control valve actuated by a brushless DC motor has been
carried out. Different tests were carried out at varying speeds by using an external
potentiometer and, from the time constants obtained, first order systems were designed. The
theoretical response curves for the first order systems were generated using MATLAB
software, and response characteristics such as rise time, peak time and settling time were
determined from the output response curves.
A comparative study between the motor driven valve and a pneumatic valve has indicated
that the response times are much lower for the motorised valve, making its response much
faster. Hence a motorised valve can be used for control applications that warrant a faster
response. Rocket engines used in propulsion systems require a much faster response, so
motorized valves are found to be a very good choice for control valve actuation in piping
systems such as those used in the aerospace industries.
REFERENCES
Avila, A., Carvajal, C. & Carlos Cotrino, C. 2014. Characterization of a Butterfly-type Valve, IEEE
transactions, 16(1): 213–218.
Fisher Controls International. 2005. Control Valve Handbook.
Ireneusz, D. & Stanislaw, F. 2014. Characteristics of flow control valve with MSMA actuator, Interna-
tional Carpethian Control Conference, Krakow, Poland.
Li, T. Huang, J. Bai, Y. Quan, L. & Wang, S. 2015. Characteristics of a Piloted Digital Flow Valve Based
on Flow Amplifier, International Conference on Fluid Power and Mechatronics.
Miller, R.W. 1996. Flow Measurement Engineering Handbook. New York: McGraw Hill.
Oriental Motors, Brushless DC Motor and Driver Package, BLH Series Operating Manual.
Park, J.S. Bon-GwanGu, Kim, Jin-Hong, Choi, Jun-Hyuk & Jung, In-Soung. 2012. Development of
BLDC Motor Drive for Automotive Applications, Electrical Systems for Aircraft, Railway and Ship
Propulsion (ESARS).
Sanoj, K.P., Ajeesh, K.N., Ganesh, P. & Sujithkumar, R. 2013. Characterization of Mass Flow Control
System for Liquid Rocket Engine Application, ‘Proceedings of National Conference on Advanced
Trends in Chemical Engineering’, Govt. Engineering College, Thrissur, India.
Van Neikerk, D. 2015. Brushless Direct Current Motor Efficiency Characterization, Electrical Machines &
Power Electronics (ACEMP).
B. Sajeenabeevi
Department of Chemical Engineering, Government Engineering College, Kerala, India
C.G. Varma
Department of Live Stock Production Management, Kerala Veterinary and Animal Sciences University,
Kerala, India
ABSTRACT: Anaerobic Digestion (AD) of wastes is one of the best treatment methods
in the arena of waste management. Biogas, the end product of AD, comprises 40–75%
CH4 and 25–55% CO2, with other minor components such as H2S and SO2. Increased
concentrations of the minor gases cause corrosion of the pipelines used in bioenergy
generation. Hence, the present study was undertaken to develop a low-cost scrubbing
mechanism for toxic gas removal. A biogas purification system with multi-stage scrubbing
apparatus (Phase I and Phase II) was designed and utilized for the study. The principle of
chemical absorption was employed, and the efficiency of different caustic solutions at
saturated concentration was investigated. Scrubbing at Phase I gave a 5% increase in
methane and a 5.8% removal of CO2. Removal rates of 43% and 37% were observed for H2S
and NH3, respectively. Carbon dioxide was removed at a rate of 34.9% for KOH, followed by
NaOH and Ca(OH)2 at rates of 34.1% and 33.9% respectively, over a duration of three
minutes. It was found that the absorption capacity of the caustic solutions dropped within
a short time period. Hence, it is necessary to replace the caustic solution in order to keep
the chemical at its saturation point.
1 INTRODUCTION
Anaerobic Digestion (AD), popularly known as biogas technology, has gained a lot of
momentum in the arena of waste management because of its dual role: the conversion of
waste to energy and the mitigation of greenhouse gas emissions from the disposal of waste.
The end product of AD is mainly biogas, which principally comprises 40–75% CH4 and
25–55% CO2, with other minor components such as H2S and SO2 [Kadam & Panwar, 2017].
In India, biogas technology is chiefly employed in the management of manure and farm
waste produced by the activities of agriculture and its allied sectors. These wastes are rich in
carbon and nitrogen, which are prerequisites for the bacteria involved in AD. A C:N ratio of
25–30 is ideal for AD [Sanaei-Moghadam et al., 2014], but the practices adopted can shift
the C:N ratio to be either too low or too high. This in turn affects the metabolism of the
anaerobic bacteria, finally affecting the composition of the biogas, with increased
concentrations of undesirable gases such as H2S, CO2 and NH3 [Scano et al. 2014].
An increased concentration of these gases is not suitable for combustion systems. Carbon
dioxide decreases the calorific value of the biogas because of its non-combustible nature.
It is non-toxic, but other gases present, such as H2S, are toxic when
2.1 Theory
Chemical absorption is an adequate technology for the removal of CO2, NH3 and H2S from
biogas. In a scrubber system, effluvium from the biogas is transferred from the gas to the
liquid as part of the reaction [Privalova et al., 2013]. Amines, caustic solvents and amino acid
salt solutions are the various chemicals used in biogas purification [Abdeen et al., 2016], but
caustic solvents are mostly chosen for cost efficiency. All aqueous solutions for chemical
scrubbing were prepared at the St Thomas College chemical laboratory, Thrissur, Kerala. The
dissolved caustic salt reacts with CO2 as part of the purification process [Üresin et al., 2015].
Figure 1. Basic layout of experimental setup for electricity generation from biogas.
above atmospheric conditions. Sediment from the caustic scrubbing had accumulated at the
bottom, and regular pH measurements were made using a digital pH meter. All chemical
reactions were carried out multiple times for precision.
A pilot study was carried out with various combinations of biodegradable waste that had
been generated at the university canteen, KVASU, Mannuthy, Kerala. Co-digestion of the veg-
etable leftovers with cow dung, maintained at a constant temperature (37°C), gave the maximum
biogas yield. Hence, this combination was used when evaluating the efficiency of scrubbing units.
The calcium carbonate, iron sulfide, calcium and nitrogen formed during these reactions
do not have much effect on the performance of the biogas and can be removed along with
scrubbing material at regular intervals. Thus, the biogas coming out of scrubber 1 has a lesser
amount of CO2, H2S and NH3 than it does at the inlet [Katare et al. 2016].
The third caustic solvent utilized in the study to absorb CO2 is calcium hydroxide (Ca(OH)2).
The reaction of Ca(OH)2 with CO2 is given by Equation 6:
Ca(OH)2 + CO2 → CaCO3 + H2O (6)
In all the above reactions, it can be seen that CO2 is absorbed by chemicals used in the
scrubber. Thus, the biogas coming out of scrubber II has a lesser amount of CO2 than it does
at the inlet [Leonzio et al. 2016].
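A back-of-the-envelope sketch of the implied stoichiometry (assuming the usual carbonate-forming reactions, i.e. two moles of NaOH or KOH and one mole of Ca(OH)2 per mole of CO2, and ideal gas behavior at STP), estimating the caustic mass needed per volume of CO2 absorbed:

MW = {"CO2": 44.01, "NaOH": 40.00, "KOH": 56.11, "Ca(OH)2": 74.09}  # g/mol
MOL_PER_CO2 = {"NaOH": 2.0, "KOH": 2.0, "Ca(OH)2": 1.0}
MOLAR_VOLUME = 22.414   # L/mol at STP (ideal gas assumption)

def caustic_mass_g(co2_litres, caustic):
    n_co2 = co2_litres / MOLAR_VOLUME
    return n_co2 * MOL_PER_CO2[caustic] * MW[caustic]

# 100 L of CO2 is an arbitrary illustrative quantity.
for chem in ("NaOH", "KOH", "Ca(OH)2"):
    print(f"{chem}: {caustic_mass_g(100.0, chem):.0f} g per 100 L of CO2")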
The raw biogas is allowed to pass through the scrubbing system. Initial purification was
done using a single column packed unit for the removal of CO2, H2S and NH3, as shown in
[Fig. 2]. The composition of raw biogas (i.e. at the inlet and outlet of the Phase I scrubber) is
as shown in [Table 1] and [Fig. 3].
Scrubbing at Phase I reported a 5% increase in methane and a CO2 removal of 5.8%.
A removal rate of 43% and 37% was observed for H2S and NH3 respectively. Experimental
data from Phase I reported better H2S removal, as compared with [Katare, et al. 2016].
A further purification process was carried out using saturated caustic solution of NaOH,
KOH and Ca(OH)2. The results for CO2 removal are shown in Table 2 and Figure 3. The
caustic absorption process varied with time duration (Fig. 4).
It was found that different caustic solutions gave dissimilar CO2 levels at the scrubber
outlet. The nonlinear plot clearly indicated that the CO2 removal rate varied with the type of
caustic solution and depended on time. Carbon dioxide sequestration
[Table: biogas content at the scrubber outlet for Ca(OH)2, NaOH and KOH; tabulated values not recovered.]
Figure 5. Variation of composition of CO2 and CH4 with different chemical solution.
It can be seen from the findings that saturated caustic solutions are highly suitable for the
continuous purification process. Final results were in accordance with the reported literature.
Results obtained were inferior to those of [Tippayawong & Thanompongchart, 2010]. The
maximum H2S and CO2 removal rates were obtained in comparison with the caustic scrubbing
system developed by [Üresin et al. 2015]. Enhanced H2S absorption was observed compared
to that stated by [Miltner, Makaruk et al. 2012]. Efforts were made to create purified
methane-enriched gas for a sustained period. It was found that the absorption capacity of the
caustic solution dropped within a short time period. Hence, it is necessary to replace the
caustic solution in order to keep the chemicals at the saturation point. Regular replacement
results in concentration instability, and the major drawback of consuming caustic solvents is
that they are very difficult to recycle. Although they are comparatively cheap, a huge quantity
of chemicals is needed to complete the purification process and to overcome the drop in
engine efficiency caused by the presence of CO2 in biogas. The amount utilized in the present
study is high for producing purified biogas, so future work is needed to reduce the capital
cost of biogas enrichment and its applications.
4 CONCLUSION
Sequestration of CO2, NH3 and H2S from biogas by a multistage scrubber was studied.
Enhanced H2S scrubbing effectiveness has been gained in the Phase 1 scrubber. Meanwhile,
the effectiveness was found to be dropping with time. NaOH, KOH and Ca(OH)2 were used
in the current study and their absorption behaviors were observed. The absorption character-
istics of all the caustic solvents indicated similar results. Chemical absorption by caustic
solution was found to be an effective technique for short process times, but the absorption
capability weakened quickly with time. Chemical absorption with caustic solution is not
advisable as an alternative for biogas quality upgrading, due to its limited reusability. Still, caustic
scrubbing techniques are considered as a low-cost sequestration method for sub-continental
conditions. In addition, capturing CO2 into solid phase, instead of it being released into the
atmosphere, makes the projected enrichment process more environmentally friendly.
REFERENCES
Abdeen, F.R., Mel, M., Jami, M.S., Ihsan, S.I. & Ismail, A.F. (2016). A review of chemical absorption
of carbon dioxide for biogas upgrading. Chinese Journal of Chemical Engineering, 24(6), 693–702.
Cebula, J. (2009). Biogas purification by sorption techniques. Architecture Civil Engineering Environ-
ment Journal, 2, 95–103.
ABSTRACT: A new composition of precursors was identified using china clay, quartz and
calcium carbonate to fabricate the microfiltration membrane. The membrane was fabricated
by pressing method and sintered at 1000°C. Various characteristics of membrane such
as porosity, average pore size, water permeability and chemical resistance were evaluated.
Energy Dispersive X-ray analysis (EDX) was conducted to identify the elements present in
the membrane. The porosity, water permeability and pore size of membrane are found to be
37%, 2.88 × 10−3 L/m2.h.Pa and 555 nm respectively. Corrosion resistance test indicates that
the membrane can be subjected to acid and alkali based cleaning procedure.
1 INTRODUCTION
In recent years, the preparation of clay-based inexpensive ceramic membranes has been
receiving significant attention due to their cost benefits. Ceramic membranes can be deployed
in highly corrosive media and high-pressure applications. Numerous articles have been
published on the preparation of clay-based ceramic membranes. Nandi et al. (2009)
formulated a new composition of raw materials using kaolin, quartz, sodium carbonate,
calcium carbonate, boric acid and sodium metasilicate to synthesize a circular membrane. The
prepared membrane was deployed for the purification of oil-water emulsions. Abbasi et al.
(2010) utilized kaolin, clay and α-alumina to synthesize mullite and mullite–alumina based
ceramic membranes for the separation of oil emulsions. Similarly, a ceramic membrane was
manufactured from a mixture of kaolin, pyrophyllite, feldspar, quartz, calcium carbonate, ball
clay and titanium dioxide by the uniaxial compaction method; the membrane performed
well in the treatment of oil-water emulsions (Monash & Pugazhenthi 2011). Using perlite
materials, Al-harbi et al. (2016) prepared a super-hydrophilic membrane (mean pore size
of 16 µm) for wastewater treatment applications. In another work, dairy wastewater was
treated using a novel tubular ceramic membrane prepared from a mixture of naturally
available clays (Kumar et al. 2016). Jeong et al. (2017) used pyrophyllite and alumina to
prepare a composite ceramic membrane (pore size of 0.15 µm), whose performance was
investigated for the treatment of low-strength domestic wastewater.
A detailed survey of the above literature indicates that clay-based membranes have mainly
been deployed for wastewater treatment applications. To the best of our knowledge, the
applicability of clay-based ceramic membranes in the biotechnological field is less studied. In
this context, the applicability of clay-based ceramic membranes to the biotechnological field
needs to be investigated. Such research would be useful for understanding the suitability of
ceramic membranes for biotechnological applications.
This article addresses the preparation of a ceramic membrane using china clay, quartz and
calcium carbonate. Primary characteristics such as porosity, average pore size and water
permeability were evaluated. A corrosion resistance test was conducted to identify suitable cleaning
2 EXPERIMENTAL
2.1 Precursors
The raw materials, namely china clay, quartz and calcium carbonate, were used to develop
the ceramic membrane; the composition is presented in Table 1. China clay and quartz were
purchased from Royalty Minerals, Mumbai, India. The calcium carbonate was procured from
Loba Chemie, Ltd. The materials were used without any pretreatment.
China clay 50%; quartz 25%; calcium carbonate 25%.
2.3 Characterization
The primary characteristics of the membrane, such as porosity, water permeability, average
pore size and chemical resistance, were evaluated. The porosity of the membrane was evaluated
by Archimedes’ principle using water as the wetting liquid (Nandi et al. 2009). The water flux
(J, L/(m²h)) of the membrane was measured using an indigenous continuous dead-end filtration
setup (Fig. 3). The flux was measured at different applied pressures (69–345 kPa) at room
temperature (25°C). This involves the measurement of permeate volume at intervals of 5 min
during the total run time of 25 min. The flux was calculated using the relation:
J = Q/(A × t) (1)
where, Q is the volume of permeate collected, t is the time and A is effective membrane area
(m2) for permeation.
The water permeability (Lh) and average pore size (rl) of the membrane were determined
from the water flux data according to the following expression:
Jv = (ε·rl²·ΔP)/(8·µ·l) = Lh·ΔP (2)
where Jv (L/m²h) is the water flux through the membrane, ΔP (kPa) is the trans-membrane
pressure drop across the membrane, µ is the viscosity of water, l is the pore length, rl is the
average pore radius and ε is the porosity of the membrane.
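In practice, Equation 2 is applied by fitting Lh as the slope of flux versus trans-membrane pressure and then inverting for the pore radius, rl = sqrt(8µ·l·Lh/ε). A minimal Python sketch with assumed flux data, taking the pore length as the membrane thickness (an assumption):

import numpy as np

dp = np.array([69e3, 138e3, 207e3, 276e3, 345e3])    # Pa, pressures used above
jv = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])  # L/(m^2 h), assumed fluxes

lh = np.polyfit(dp, jv, 1)[0]        # slope -> Lh in L/(m^2 h Pa)
lh_si = lh * 1e-3 / 3600.0           # convert to m/(s Pa)

mu = 8.9e-4                          # Pa.s, water at 25 degC
l = 4e-3                             # m, pore length ~ membrane thickness (assumed)
eps = 0.37                           # porosity measured in this work
r = np.sqrt(8.0 * mu * l * lh_si / eps)
print(f"Lh = {lh:.2e} L/(m^2.h.Pa), average pore radius ~ {r * 1e9:.0f} nm")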
The corrosion resistance of the membrane was tested in acid and alkali solutions individually at
different pH levels (1–14) using HCl and NaOH. To do so, the membrane was kept in contact
with the acid and alkali solutions for seven consecutive days at room temperature. After that,
the weight loss of the membrane was measured, which characterizes its corrosion resistance.
In addition, EDX was performed to confirm the elements present in the membranes.
3.1 Porosity
The porosity was determined by Archimedes’ principle using water as the wetting liquid.
Generally, the pore size of a membrane depends on its porosity. In this work, the porosity of
the membrane was found to be 37%. In this context, it can be pointed out that the obtained
porosity (37%) is comparable to, or even higher than, the porosity of cordierite (36%), mullite
(32%) and kaolin (36%) membranes (Dong et al. 2007, Abbasi et al. 2010, Monash &
Pugazhenthi 2011). Thus, it is inferred that the formulated raw material composition provides
higher porosity.
EDX elemental composition (%): carbon 16.24, oxygen 58.01, aluminum 5.95, silicon 14.85, calcium 4.95.
Properties Values
Porosity (%) 37
Water permeability (L/(m² h Pa)) 2.88 × 10−3
Pore size (nm) 555
4 CONCLUSIONS
A new raw material composition was identified for the preparation of a ceramic microfiltration
membrane. The porosity, water permeability and pore size of the membrane are found to
be 37%, 2.88 × 10−3 L/(m²·h·Pa) and 555 nm, respectively. The corrosion resistance test
indicates that the membrane can be subjected to acid- and alkali-based cleaning procedures.
Hence, it is concluded that the fabricated ceramic membrane possesses a very small pore size,
which is suitable for various biotechnology applications.
The authors wish to express their sincere thanks to DST-SERB for the financial support.
C.Y. Lincy
Department of Chemical Engineering, GEC Thrissur, Kerala, India
Saurabh Sahadev
Department of Chemical Engineering, GEC Thrissur, Kerala, India
ABSTRACT: Carbon felt based phenolic composites were prepared at various densities, viz.
0.3, 0.4, 0.5, 0.6 and 0.7 g/cc. The mechanical and thermal characteristics of the composites
were evaluated using standard test procedures. With the change in density from 0.3 to
0.7 g/cc, the flexural strength increases from 5.2 to 32 MPa, the compressive strength increases
from 1.1 to 6.9 MPa, the tensile strength increases from 45 to 225 KSC (kg/cm²), and the
resilience improves from 1.5 to 1.8 kJ/m². Over this density range, the thermal conductivity
of the composites increases only marginally, from 0.1 to 0.2 W/mK. The char yield of the
composites, as determined by thermogravimetric analysis, was about 60% up to 900°C.
Ablative characteristics were determined through plasma arc jet simulated test procedures at
an energy flux of 50 W/cm² and a duration of 10 seconds; the heat of ablation was 2044 cal/g,
and the composite with 0.7 g/cc density survived the erosion test with an erosion rate of
0.002 mm/s and a mass loss of 0.005 g.
1 INTRODUCTION
In a launch vehicle, momentum for the rocket is gained through the conversion of chemical
energy to mechanical energy. In a typical propulsion system, fuel and oxidizer undergo
combustion, generating low molecular weight gases at high temperature (3500 K) and
pressure (200 bar). When this hot, high-pressure gas is expanded through the nozzle, thrust is
created and the rocket experiences forward motion. For most of the inner wall portions of the
nozzle, the temperature would be in the range of 700–1200 K. No known structural material
can survive such severe erosive and thermal shock conditions; thus the metallic nozzle needs
protection from very high speed, extremely hot gas streams. Traditionally, highly dense silica/
phenolic and carbon/phenolic composites (1.8 g/cc) are employed as nozzle liners, with an
ablation lining thickness of about 8–12 mm. It is envisaged that the fully dense liner can be
replaced with a porous, low-density, lightweight material with a thick anti-erosion coating,
giving a huge reduction in the weight of the nozzle with improved performance, as the porous
material would have a much lower thermal conductivity than its fully dense counterparts. In
view of the above,
carbon felt impregnated phenolic composites are investigated for their role as lightweight
ablative liners. Carbon felt-based ablators have several advantages over classical fully dense
ablative liner materials. Most importantly, they mitigate the limited strain response of large
rigid substrates. Carbon felt materials are known for their benign insulating properties,
uniform bulk density and better shape retention, with macro and micro communication
channels among the cells allowing efficient resin infiltration. By enabling manufacture in
larger sizes, felt-based substrates reduce the number of independent parts, mitigating the need
for gap fillers. They also offer improved robustness in absorbing loads and deflections, and
With a density variation from 0.3 g/cc to 0.7 g/cc, the flexural strength changed from 5.2 to
32 MPa, the resilience changed from 1.5 to 2.5 kJ/m², the compressive strength changed from
1.1 to 6.9 MPa and the tensile strength changed from 4 to 22 MPa. The compressive and
tensile strengths of the composite with 0.7 g/cc density are remarkable when compared with
conventional TPS materials of the same density.
4 CONCLUSION
The carbon felt phenolic composites exhibit superior mechanical properties, and their thermal
conductivity values are very low. Low density felt composites are thus attractive materials for
nozzle liner end uses.
Zakir Hussain
Department of Chemical Engineering, Rajiv Gandhi Institute of Petroleum Technology, Jais,
Amethi, Uttar Pradesh, India
ABSTRACT: Fresh water is essential for the survival of many living organisms. Though
71% of the earth’s surface is covered by water, only 3.5% is available for human needs, with
the remaining 96.5% in the form of oceans. Large quantities of water are
human needs, with the remaining 96.5% in the form of oceans. Large quantities of water are
consumed by industries due to rapid industrialization, thus polluting the fresh water and
further causing its scarcity. To meet the demand, solar energy can be used to convert seawater
into fresh water in a solar distillation unit using a solar still. The solar still should possess
high absorptivity so that it captures the maximum solar energy and converts the seawater
to fresh water. In this paper, the effect of different materials such as black paint, paraffin
wax, coal bed, sand and ceramic packings, as coating materials in various combinations to
the base aluminum basin, was studied in a detailed way. Energy balance and heat transfer
coefficients were estimated for all the cases. Further, the overall efficiency was also estimated
for all the cases and compared. The black paint coated basin (base case) has shown the least
efficiency (42.19%) among all considered cases, and the basin coated with the combination of
black paint, sand and ceramics has shown the highest efficiency (65.06%).
Keywords: desalination, solar stills, heat-absorbing, energy balance, heat transfer coefficient
1 INTRODUCTION
Water is the main source of survival for many living organisms, and humans especially rely
on fresh water for consumption and household purposes. Around 71% of the earth’s surface
is covered with water but only 3.5% is available in the form of fresh water, while 1.7% is in
the form of glaciers, 1.7% is in the form of groundwater and 0.1% is in the form of rivers
and lakes (Sethi & Dwivedi, 2013). Water is an essential commodity for the survival not only
of humans but of all life forms. The availability of fresh water is limited, but the demand
for it is increasing rapidly, due to the growth in population and to rapid industrialization.
Industrial wastes are disposed of directly into the available fresh water sources, thus polluting
them and creating a rapid decline in the availability of the fresh
water (Panchal, 2015). According to the WHO (World Health Organization, 2017), around
25% of the human population does not have the provision of safe drinking water. With the
population growing by 82 million every year, the need for safe drinking water is increasing
day by day and as many as a third of humans will face a shortage of water by 2025 (Omara
& Kabeel, 2014). To mitigate and overcome this fresh water problem, many water purifica-
tion techniques are available for the production of clean water. Solar distillation is one such
technique that converts seawater into potable water. Solar energy is abundantly available in
nature and can be successfully utilized in treating the saline water, thus creating a demand
for solar distillation that is increasing every year. The basic purpose of solar distillation is
to provide fresh drinking water from seawater in order to meet the demand for fresh water
(Somanchi et al., 2015). The process of solar distillation occurs in two ways based on the
energy consumption, namely thermal and non-thermal processes. In the solar distillation unit,
there are two types of solar stills that are used based on the energy utilization, namely active
and passive solar stills. Among these, active solar stills need an energy input in the form of
pumps for the input of feed into the system, whereas passive systems do not require energy
input. Hence, passive solar stills are preferred over active as they are more cost-effective. There
are many designs for solar stills, such as a single-slope solar still (Krishna et al., 2016), double
slope solar still (Murugavel et al., 2008), wick-type still (Suneesh et al., 2016), spherical still
(Dhirman, 1988), vertical still (Boukar, 2004), and multi-effect still (Tanaka et al., 2009). The
single-slope solar still provides a better productivity in the winter whereas the double slope
provides better results in the summer (Yadav & Tiwari, 1987). The design and fabrication of
a single-slope solar still is much easier when compared to that of the other designs as this
can be made from locally available materials like wood and polyurethane. Further, the design
involves a low maintenance cost and skilled labor is not required (Gugulothu et al., 2015).
A single-slope solar still is considered in the present work. The inclination of the single-slope
solar still must be equal to the latitude of the experimental location, in order for the maximum
solar irradiance to fall on the still and obtain the maximum yield (Malaeb et al., 2014). The addition
of a sand bed to the conventional solar still gives a higher productivity: the efficiency
increases from 35% in a conventional solar still to 49% in a still with a 0.01 m high sand
bed above the basin (Omara & Kabeel, 2014). The operating principle of the solar still involves
the evaporation of pure water molecules from the saline water due to the impingement of
solar energy, leaving behind the dissolved salts and impurities at the bottom of the still. The
incident solar radiation passes through the glazing material of the still and is absorbed by the
heat-absorbing material, which heats the seawater; the evaporated water condenses on the
glass cover and is collected through a collecting channel (Aburideh et al., 2012). A schematic
diagram showing the principle is given in Figure 1.
The single-slope distillation unit has an external box body made of wood, with dimensions
of 0.715 m in length and 0.415 m in breadth; the height of the shorter edge is 0.125 m, the
height of the longer edge is 0.36 m, and the wall thickness is 0.01 m. Inside the box there is
0.02 m thick insulation made of thermocol sheets. The basin that holds the seawater is made
of aluminum (length 0.64 m, breadth 0.33 m, height 0.085 m) and is coated with black paint.
The glazing material, a glass cover over the top of the unit, is 0.004 m thick, with an
inclination of 18° that results in a heat-absorbing area of 0.3154 m². A collector is attached
to the glass cover at the lower end to collect the condensate. To prevent the glass from
slipping and to reduce vapor losses, a rubber seal is provided between the wooden box and
the glass cover. The basic experimental setup is shown in Figure 2.
2.1.4 Channel
The channel is used to collect the condensate that forms on the surface of the top glass
cover. The materials that can be used are PVC, galvanized steel, and RPF. In the present
study, PVC pipe of 1 inch diameter is used.
$$\frac{I_s A_g}{A_b} = \frac{I_s r_g A_g}{A_b} + \frac{q_{g,s} A_g}{A_b} + \frac{q_{h,g} A_g}{A_b} + \frac{q_{k,air} A_{k,air}}{A_b} + \frac{q_{k,l} A_{k,l}}{A_b} + \frac{q_{k,b} A_b}{A_b} + \frac{\dot{m}_{cw}\, h_{sat,g}\, A_b}{A_b} \qquad (2)$$

where
I_s = solar radiation intensity, W/m²;
A_g = area of the glass surface, m²;
A_b = area of the basin surface, m²;
r_g = reflectivity of the glass cover for visible light;
A_k,air = circumferential area of the solar still covered by inside moist air, m²;
A_k,l = circumferential area of the solar still covered by seawater, m²;
ṁ_cw = mass velocity of condensed water, kg/m²·s;
h_sat,g = enthalpy of water at saturation temperature, kJ/kg.
Considering the heat transfer from the cover to the atmosphere by convection:

$$q_{h,g} = h_g \left(T_g - T_a\right)$$

where T_g is the glass temperature, T_a is the ambient temperature, and h_g is the forced
convection heat transfer coefficient, which depends on the wind velocity v; here v = 3.5 m/s.
The radiative heat transfer from the glass cover to the atmospheric air is given by the
formula:

$$q_{g,s} = \varepsilon_g C_s \left[\left(\frac{T_g}{100}\right)^4 - \left(\frac{T_{sky}}{100}\right)^4\right]$$

where
ε_g = glass emissivity (0.88);
C_s = radiation constant, 5.667 W/m²·K⁴;
T_sky = sky temperature (T_a − 20°C).
The conductive heat transfer from the bottom to the atmosphere may be formulated as:
where
δ_g = thickness of the glass;
λ_g = thermal conductivity of the glass (0.96 W/m·K);
δ_b = thickness of the basin;
λ_b = thermal conductivity of the basin (205 W/m·K for aluminum, 1.6 W/m·K for black
paint, 2.05 W/m·K for sand, 0.25 W/m·K for paraffin wax, and 0.33 W/m·K for coal);
δ_w = thickness of the wood;
λ_w = thermal conductivity of the wood (0.17 W/m·K);
δ_th = thickness of the thermocol (20 mm);
λ_th = thermal conductivity of the thermocol (0.036 W/m·K);
h_a = convective heat transfer coefficient at ambient temperature, W/m²·K;
T_b = temperature of the basin, K;
T_a = ambient temperature, K.
Considering the heat transfer from the circumferential area of the still by conduction, from
the inside moist air to the atmosphere:
$$h_{wc} = 0.884\left[\left(T_w - T_{gi}\right) + \frac{\left(p_w - p_{gi}\right)\left(T_w + 273\right)}{268.9 \times 10^{3} - p_w}\right]^{1/3} \qquad (13)$$

$$P_w = \exp\left(25.317 - \frac{5144}{T_w + 273}\right), \ \mathrm{N/m^2} \qquad (14)$$

$$P_{gi} = \exp\left(25.317 - \frac{5144}{T_{gi} + 273}\right), \ \mathrm{N/m^2} \qquad (15)$$

$$h_e = 16.273 \times 10^{-3}\, h_{wc}\, \frac{P_w - P_{gi}}{T_w - T_{gi}} \qquad (16)$$

where T_w is the water temperature and T_gi is the inner glass temperature.
The whole experiment was carried out during March and April 2017. The effects of various
materials, in different combinations, as coatings for the aluminum basin were studied.
Further, the effect of the treating capacity of the solar still on the overall efficiency, energy
balance and heat transfer coefficients was studied by varying the treating volume of the
seawater as 2, 2.5, and 3 liters. For all the cases, the overall efficiency, energy utilized (%)
and heat transfer coefficients were estimated.
The effects of sand, ceramic, paraffin wax and coal bed materials, along with black paint,
for varying treating capacities of water, are shown in Figures 3 to 9.
As shown in Figures 3 to 7, the optimum exposure time was found to be 1:30 p.m. for black
paint (with and without ceramics), black paint (with and without sand), black paint (with
sand + ceramics), and black paint + paraffin wax (with and without sand), as the maximum
solar irradiance was observed at that time.
As shown in Figures 8 and 9, the optimum exposure time was found to be 12:30 p.m. in the
case of black paint + coal bed (with and without ceramics). The optimum exposure time was
Figure 3. Effect of ceramics on base case.
Figure 4. Effect of sand on base case.
Figure 7. Effect of sand on paraffin wax.
Figure 8. Effect of coal bed on base case.
Figure 11. Energy utilized for 3 liters treating volume.
Figure 12. Heat transfer coefficient for 3 liters treating volume.
Parameter         Seawater    Distillate
pH                8.2         7.6
TDS, ppm          35,430      3
TSS, ppm          30,620      8
COD, ppm          750         6
BOD, ppm          300         2
Chlorides, ppm    17,600      50
Hardness, ppm     6,620       22
4 CONCLUSIONS
Desalination of seawater using the solar distillation unit is the most economical and efficient
process, and it also has a low external energy requirement. All the materials used for coating
proved to be suitable, with improved efficiency compared to the base case (black-paint-coated
solar still). The combination of the base solar still basin coated with sand and ceramics
proved to be the best in view of its highest efficiency (65.06%). The lowest efficiency was
given by the black-paint-coated basin (42.19%). The use of paraffin wax (a phase change
material) and a coal bed proved fruitful, as the production of distillate continued even in the
absence of the sun, until 7:30 p.m.
REFERENCES
Aburideh, H., Deliou, A., Abbad, B., Alaoui, F., Tassalit, D. & Tigrine, Z. (2012). An experimental study
of a solar still: Application on the sea water desalination of Fouka. Procedia Engineering, 33, 475–484.
Boukar, M. & Harmim, A. (2004). Parametric study of a vertical solar still under desert climatic condi-
tions. Desalination, 168, 21–28.
Dhirman, N.K. (1988). Transient analysis of a spherical solar still. Desalination, 69(1), 47–55.
Gugulothu, R., Somanchi, N.S., Devi, R.S.R. & Banoth, H. (2015). Experimental investigations on per-
formance evaluation of a single basin solar still using different energy absorbing materials. Aquatic
Procedia, 4, 1483–1491.
Krishna, P.V., Sridevi, V. & Priya, B.S.H. (2016). Comparative studies on a single slope solar distillation
unit with and without copper electroplating on aluminium basin. International Journal of Advanced
Research, 4(9), 1028–1039.
Malaeb, L., Ayoub, M.G. & Al-Hindi, M. (2014). The experimental investigation of a solar still coupled
with an evacuated tube collector. Energy Procedia, 50, 406–413.
Murugavel, K.K., Chockalingam, K.K. & Srithar, K. (2008). Progresses in improving the effectiveness
of the single basin passive solar still. Desalination, 220(1–3), 677–686.
Omara, Z.M. & Kabeel, A.E. (2014). The performance of different sand beds solar stills. International
Journal of Green Energy, 11(3), 240–254.
Panchal, H.N. (2015). Enhancement of distillate output of double basin solar still with vacuum tubes.
Journal of King Saud University-Engineering Sciences, 27(2), 170–175.
Sethi, A.K. & Dwivedi, V.K. (2013). Exergy analysis of double slope active solar still under forced cir-
culation mode. Desalination and Water Treatment, 51(40–42), 7394–7400.
Somanchi, N.S., Sagi, S.L.S., Kumar, T.A., Kakarlamudi, S.P.D. & Parik, A. (2015). Modelling and
analysis of a single slope solar still at different water depths. Aquatic Procedia, 4, 1477–1482.
Suneesh, P.U., Paul, J., Jayaprakash, R., Kumar, S. & Denkenberger, D. (2016). Augmentation of distil-
late yield in “V”-type inclined wick solar still with cotton gauze cooling under regenerative effect.
Cogent Engineering, 3(1), 1–10.
Tanaka, H. & Nakatake, Y. (2009). One step azimuth tracking tilted-wick solar still with a vertical flat
plate reflector. Desalination, 235(1–3), 1–8.
World Health Organization. (2017). Progress on drinking-water, sanitation and hygiene, 2017.
Yadav, Y.P. & Tiwari, G. (1987). Monthly comparative performance of solar stills of various designs.
Desalination, 67, 565–578.
P.S. Anilkumar
Department of Physics, Indian Institute of Science, Bangalore, Karnataka, India
V.O. Rejini
Department of Chemical Engineering, Government Engineering College, Thrissur, Kerala, India
1 INTRODUCTION
2 METHODOLOGY
In this work the main components are the pulse generator, the oscilloscope, and the
interfacing language LabVIEW. The pulse generator used here is a Pico pulse generator with
a 300 ps rise time and a 750 ps fall time. This study focuses on magnetization reversal in
magnetic nanowires probed through electrical transport. The automation of the measurement
system is carried out using the Pico pulse generator. With this setup, the transport
measurements and their optimization using electrical pulses of different widths are carried
out. The statistical analysis of these results is expected to give information about the
magnetization processes in nanowires.
Today, data storage relies heavily on magnetic hard disc drives in the form of thin films.
The speed of a hard disc depends on the rotational speed of the disc, which determines how
fast the sensor can read data. The interchanging of north and south poles cannot be done
manually; it is done by magnetization switching, which requires a magnetic field.
The replacement of magnetic-field-assisted reversal or switching by the more sophisticated
current-induced magnetic reversal is emerging as a viable and attractive option for next-
generation technology. This mechanism requires spin-polarized electrons from a ferromagnet
to switch the magnetization. The current density required for magnetic switching is of the
order of 10¹² A/m². To reduce this huge current, the magnetic material is patterned into
nanowires. However, Joule heating would destroy these wires; hence pulsed currents with
durations of the order of nanoseconds are used.
The figure above shows the SEM image of the nanowire used to pass the current pulses; it
has a width of 450 nm, a thickness of 20 nm, and an inner diameter of 1.86 µm.
Figure 6. Input displayed on DPO when 20 ns pulse width applied through nanowire.
Figure 7. Output obtained from DPO when 20 ns pulse width applied through nanowire.
Figure 8. Input displayed on DPO when 30 ns pulse width applied through nanowire.
Figure 9. Output obtained from DPO when 30 ns pulse width applied through nanowire.
Figure 10. Input displayed on DPO when 40 ns pulse width applied through nanowire.
Figure 11. Output obtained from DPO when 40 ns pulse width applied through nanowire.
Figure 14. Input displayed on DPO when 60 ns pulse width applied through nanowire.
Figure 15. Output obtained from DPO when 60 ns pulse width applied through nanowire.
Figure 16. Input displayed on DPO when 70 ns pulse width applied through nanowire.
Figure 17. Output obtained from DPO when 70 ns pulse width applied through nanowire.
Figure 18. Input displayed on DPO when 80 ns pulse width applied through nanowire.
Figure 19. Output obtained from DPO when 80 ns pulse width applied through nanowire.
In this work, a Pico pulse generator was studied, programmed, and interfaced with the
controller. The rise time and fall time of the pulse generator were verified by passing current
pulses with pulse widths in the 10 ns to 80 ns range. The pulse output shape and distortion
were examined on a DPO (digital phosphor oscilloscope) interfaced with the controller; the
DPO was also programmed for this study. At first, when BNC cables were used for the
transmission of the current pulses, nearly 100% transmission was obtained. However, when a
device consisting of a nanoring, that is, a nanowire, was connected, it was found that only
10% of the signal was transmitted in the pulse width range from 10 ns to 80 ns. It was
nevertheless realized that this transmission is sufficient to achieve current-induced
magnetization reversal in nanowire structures. As future scope, under these conditions it
should be possible to image magnetic domains and measure the velocity of magnetic domain
walls at higher current densities.
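As an illustration (and not the authors' LabVIEW automation), the transmission percentage quoted above can be estimated from digitized DPO traces as the ratio of the output to input peak-to-peak amplitudes; the array names below are assumptions.

```python
import numpy as np

def transmission_percent(v_in: np.ndarray, v_out: np.ndarray) -> float:
    """Peak-to-peak amplitude of the transmitted pulse as a percentage of the input."""
    return 100.0 * np.ptp(v_out) / np.ptp(v_in)

# Illustrative 20 ns rectangular pulse attenuated to ~10% by the device.
t = np.linspace(0.0, 100e-9, 1000)
v_in = np.where((t > 10e-9) & (t < 30e-9), 1.0, 0.0)
v_out = 0.10 * v_in
print(f"{transmission_percent(v_in, v_out):.1f}% transmitted")
```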
REFERENCES
[1] Masamitsu Hayashi, Luc Thomas, Rai Moriya, Charles Rettner and Stuart S.P Parkin (2008),
Current-Controlled Magnetic Domain-Wall Nanowire Shift Register, Science, Vol. 320, 209–211.
[2] J.E. Wegrowe, D. Kelly, Y. Jaccard, Guittenne and Ansermet (1998), Current Induced Magnetic
reversal in magnetic nanowires, Europhysics Letters, 45, 626–631.
[3] Tao Yang, Takashi Kimura and Yoshichika Ottani (2008), Giant spin accumulation signal and pure
spin-current induced reversible magnetization switching, Nature Physics, Vol. 4, 851–854.
[4] E.A. Rando and S. Allende (2015), Magnetic reversal modes in multisegmented nanowire arrays
with long aspect ratio, Vol. 118, 013905-1 to 013905-8.
[5] A. Thiaville, Y. Nakatani, J. Miltat, Y. Suzuki (2005), Micromagnetic understanding of current-
driven domain wall motion in patterned nanowires, Europhysics Letters, Vol. 69, 990–996.
1 INTRODUCTION
Iron oxides are common compounds, which are widespread in nature and can be readily syn-
thesized in the laboratory. Iron oxide nanoparticles find applications in the biomedical sec-
tor (in cellular labeling, cell separation, detoxification of biological fluids, tissue repair, drug
delivery, Magnetic Resonance Imaging (MRI), hyperthermia and magnetofection) (Gupta &
Gupta, 2005), electrode materials (Kijima et al., 2011), fabrication of pigments, sorbents, gas
sensors (Afkhami & Moosavi, 2010), ferrofluids and wastewater purification (Shen, 2009a).
Eight iron oxides are known. Among these, hematite (α-Fe2O3), magnetite (Fe3O4) and
maghemite (γ-Fe2O3) are very promising and popular candidates due to their polymorphism
involving temperature-induced phase transition. These three iron oxides have unique prop-
erties (such as biochemical, magnetic, and catalytic), which make them suitable for specific
technical and biomedical applications (Cornell & Schwertmann, 2003). Maghemite is fer-
rimagnetic at room temperature but its nanoparticles smaller than 10 nm are superparamag-
netic. Maghemite is unstable at high temperatures and loses its susceptibility with time (Ray
et al., 2008; Neuberger et al., 2005). The structure of γ-Fe2O3 is cubic. Oxygen anions give rise
to a cubic close-packed array while ferric ions are distributed over tetrahedral sites (eight Fe
ions per unit cell) and octahedral sites (the remaining Fe ions and vacancies). Therefore, the
maghemite can be considered as being fully oxidized magnetite, and it is an n-type semicon-
ductor with a band gap of 2.0 eV (Wu et al., 2015).
Particle agglomeration forms large clusters, resulting in an overall increase in particle size
(Hamley, 2003). When two large-particle clusters approach one another, each comes under
the influence of the magnetic field of its neighbor. Besides the attractive forces arising
between the particles, each particle is within the magnetic field of its neighbor and becomes
further magnetized (Tepper et al., 2003). The adherence of remanent magnetic particles
causes a mutual magnetization, resulting in increased aggregation. Since the particles are
attracted magnetically in addition to the usual flocculation due to Van der Waals forces,
surface modification is often indispensable. For effective stabilization of iron oxide
nanoparticles, a very high coating density is often required. Some stabilizers such as
2 EXPERIMENTAL
Synthesis of nanoparticles was carried out using a coprecipitation process based on work
by Yunabi et al. (2008), Shen et al. (2009b), and Maity and Agrawal (2007). The modified
procedure adopted is briefly stated as follows.
A solution of 50 ml of ferrous chloride tetrahydrate was added to 100 ml of ferric chloride
hexahydrate solution under constant agitation. The contents were heated to 50°C in a Borosil
glass round-bottom reaction vessel. The 500 ml capacity reactor was specially designed for
the purpose. Ammonium hydroxide solution was added drop-wise at the specified tempera-
ture. The initially black-brown colored solution formed changed into a completely black
mixture after about half an hour at a pH of 9. The magnetic particles so formed were kept
dispersed by an ultra-sonicator for about a further 15 minutes and then collected by settling
under a ferromagnet. The particles were then transferred into a separate beaker and washed
with water until the pH reached 7. Then the particles were washed twice with ethanol. The
collected nanoparticles were dried in a vacuum oven at 60°C for about six hours.
PEG-4000 was used as a surfactant to stabilize the particles in aqueous medium. In order to
observe the effect of adding different amounts of PEG-4000 on the final particle size,
amounts of 0.5 g, 1.0 g, 1.5 g, and 2.0 g were used. It is pertinent to mention that increasing
the surfactant contact time and amount regulates the growth of the particles. Four
γ-maghemite NP samples (S1, S2, S3, and S4) were obtained by adding PEG-4000 surfactant
in amounts of 0.5 g, 1 g, 1.5 g, and 2.0 g respectively. The crystallite size, magnetic properties
and surface area of these NPs were determined.
The size and shape of the particles were observed by Transmission Electron Microscopy
(TEM). The type of crystal was verified by an X-Ray Diffraction (XRD) study after
comparison with the JCPDS files. Standard patterns for bulk magnetite, maghemite, and
hematite are given by JCPDS file numbers 19-0629, 39-1346 and 83-0664 respectively. The
crystallite size was calculated using the Scherrer equation (Paterson, 1939). The magnetic
properties of the nanoparticles were measured by Superconducting Quantum Interference
Device (SQUID) measurements (Tepper et al., 2003). A Malvern Dynamic Light Scattering
(DLS) nanosizer was used to analyze the size of the particles in solution.
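A minimal sketch of the Scherrer estimate (Paterson, 1939) used for the crystallite size is given below; the shape factor K = 0.9 and the Cu K-alpha wavelength are common assumptions, not values stated in the text.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), in nm.
    fwhm_deg is the peak full width at half maximum, in degrees 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)          # FWHM converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

# Example: a maghemite (311) reflection near 35.6 deg 2-theta, illustrative FWHM.
print(f"D = {scherrer_size(35.6, 0.8):.1f} nm")
```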
Figure 2. DLS images: (a) S1; (b) S2; (c) S3; (d) S4.
Figure 4. Hysteresis curves: (a) S1; (b) S2; (c) S3; (d) S4.
As the PEG-4000 surfactant amount increases, the size of the NPs and the saturation
magnetization value decrease. The saturation magnetization value of bulk maghemite is
about 73 emu/g (Maity & Agrawal, 2007), and the above values compare well with it.
Figure 4 below shows the hysteresis curves of S1, S2, S3, and S4. All the magnetic hysteresis
loops pass through the origin, which indicates good superparamagnetism.
4 CONCLUSION
Using the given technique, four sizes of γ-maghemite NPs, viz. 40, 23, 15, and 11 nm, were
obtained by using PEG-4000 surfactant amounts of 0.5 g, 1 g, 1.5 g, and 2.0 g respectively
under the given conditions. The crystallite size, magnetic properties and surface area of these
NPs were determined. The PEG-4000 surfactant was successful in avoiding large-scale
agglomeration of the NPs in solution. SQUID measurements showed the highest saturation
magnetization value to be 68 emu/g at room temperature when the surfactant quantity was 0.5 g. As
ACKNOWLEDGMENT
The author acknowledges the support received from National Institute of Technology, Srina-
gar, Kashmir for facilitating the work.
REFERENCES
Afkhami, A. & Moosavi, R. (2010). Adsorptive removal of Congo red, a carcinogenic textile dye, from
aqueous solutions by maghemite nanoparticles. J. Hazard. Mater., 174(1–3), 398–403.
Cornell, R.M. & Schwertmann, U. (2003). The iron oxides: Structure, properties, reactions, occurrences
and uses (2nd ed.). Weinheim: Wiley.
Gupta, A.K. & Gupta, M. (2005). Synthesis and surface engineering of iron oxide nanoparticles for
biomedical applications. Biomaterials, 26(18), 3995–4021.
Hamley, I.W. (2003). Nanotechnology with soft materials. Angew Chem. Int. Ed. Engl, 42(15),
1692–1712.
Kijima, N., Yoshinaga, M., Awaka, J. & Akimoto, J. (2011). Microwave synthesis, characterization, and
electrochemical properties of α-Fe2O3 nanoparticles. Solid State Ionics, 192(1), 293–297.
Li, J.K., Wang, N. & Wu, X.S. (1997). A novel biodegradable system based on gelatin nanoparticles and
poly (lactic-co-glycolic acid) microspheres for protein and peptide drug delivery. Journal of Pharma-
ceutical Sciences, 86(8), 891–895.
Massia, S.P., Stark, J. & Letbetter, D.S. (2000). Surface-immobilized dextran limits cell adhesion and
spreading. Biomaterials, 21(22), 2253–2261.
Maity, D. & Agrawal, D. (2007). Synthesis of iron oxide nanoparticles under oxidizing environment and
their stabilization in aqueous and non-aqueous media. Journal of Magnetism and Magnetic materials,
308(1), 46–55.
Mendenhall, G.D., Geng, Y. & Hwang, J. (1996). Optimization of long-term stability of magnetic
fluids from magnetite and synthetic polyelectrolytes. Journal of Colloid and Interface Science,
184(2), 519–526.
Miller, E.S., Peppas, N.A. & Winslow, D.N. (1983). Morphological changes of ethylene/vinyl acetate-
based controlled delivery systems during release of water-soluble solutes. Journal of Membrane Sci-
ence, 14(1), 79–92.
Neuberger, T., Schöpf, B., Hofmann, H., Hofmann, M. & Von Rechenberg, B. (2005). Superparamag-
netic nanoparticles for biomedical applications: Possibilities and limitations of a new drug delivery
system. Journal of Magnetism and Magnetic materials, 293(1), 483–496.
Paterson, A.L. (1939). The Scherrer formula for X-ray particle size determination. Physical Review,
56(10), 978–982.
Ray, I., Chakraborty, S., Chowdhury, A., Majumdar, S., Prakash, A., Pyare, R. & Sen, A. (2008). Room
temperature synthesis of γ-Fe2O3 by sonochemical route and its response towards butane. Sens Actua-
tors B Chem, 130(2), 882–888.
Ruiz, J.M. & Benoit, J.P. (1991). In vivo peptide release from poly (DL-lactic acid-co-glycolic acid)
copolymer 5050 microspheres. J Control Release, 16(1–2), 177–185.
Shen, Y.F., Tang, J., Nie, Z.H., Wang, Y.D., Ren, Y. & Zuo, L. (2009a). Preparation and application of
magnetic Fe3O4 nanoparticles for wastewater purification. Separation and Purification Technology,
68(3), 312–319.
Shen, Y.F., Tang, J., Nie, Z.H., Wang, Y.D., Ren, Y. & Zuo, L. (2009b). Tailoring size and structural
distortion of Fe3O4 nanoparticles for the purification of contaminated water. Bioresource Technol-
ogy, 100(18), 4139–4146.
Tepper, T., Ilievski, F., Ross, C.A., Zaman, T.R., Ram, R.J., Sung, S.Y. & Stadler, B.J.H. (2003). Faraday
activity in flexible maghemite/polymer matrix composites. J Appl Phys, 93(10), 6948–6950.
Wu, W., Wu, Z., Yu, T., Jiang, C. & Kim, W.S. (2015). Recent progress on magnetic iron oxide nanopar-
ticles: Synthesis, surface functional strategies and biomedical applications. Sci Technol Adv Mater,
16(2), 023501.
Yunabi, Z., Zumin, Q. & Huang, J. (2008). Preparation and analysis of Fe3O4 magnetic nanoparticles
used as targeted drug carriers. Chinese Journal of Chemical Engineering, 16(3), 451–455.
1 INTRODUCTION
2 EXPERIMENTAL
The HTC of biomass material to a fine-powdered form is carried out in an autoclave reactor,
usually made of alloy steel. Controlled heating is carried out by means of a heater of
suitable power rating under a temperature controller. The reactor is properly sealed and
head-tightened for leak-proof operation. Under an inert atmosphere, heating is started and
the autogenic pressure builds up. At the desired temperature, the carbonization reaction is
allowed to proceed for a suitable holding time. The heating is then stopped and the reactor
is allowed to cool to room temperature. The carbonized solids are then separated from the
liquid phase by vacuum filtration. The solid-product hydrochar is dried in an oven to remove
any residual moisture. The dried hydrochar is weighed and processed for further
characterization. A schematic diagram of the experimental setup for HTC conversion is
illustrated in Figure 1.
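As a small aside, the weighing step above is what fixes the solid yield of the process; a dry-basis definition, a common convention rather than a formula quoted here, is sketched below.

```python
def hydrochar_yield(m_hydrochar_dry_g: float, m_feedstock_dry_g: float) -> float:
    """Hydrochar yield (%) on a dry-mass basis."""
    return 100.0 * m_hydrochar_dry_g / m_feedstock_dry_g

# Example with illustrative masses.
print(f"yield = {hydrochar_yield(6.2, 10.0):.1f}%")
```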
Özçimen and Ersoy-Meriçboyu (2008) performed carbonization experiments on grape seed
and chestnut shell samples, with average particle sizes of 0.657 mm and 0.377 mm
respectively, to determine the effect of temperature, sweep gas flow rate and heating rate on
the biochar yield. It was found that temperature had the dominant effect on the biochar
yields, compared to the effects of the nitrogen gas flow rate and heating rate.
Sevilla and Fuertes (2009) produced highly functionalized carbonaceous materials by
means of the HTC of cellulose at temperatures in the range of 220–250°C. They observed
that the materials so formed were composed of agglomerates of carbonaceous microspheres
(size 2–5 µm) as evidenced by scanning electron microscopy.
Funke and Ziegler (2010) elaborated the reaction mechanisms of hydrolysis, dehydration,
decarboxylation, aromatization, and condensation polymerization during HTC. The mecha-
nisms were important in studying the role of different operational parameters qualitatively
for cellulose, peatbog and wood. The results were used to derive fundamental process design
improvements for HTC.
Anastasakis and Ross (2011) subjected the brown macro-alga Laminaria saccharina
to hydrothermal carbonization conversion for the generation of solid and liquid biofuels.
Experiments were performed in a batch bomb-type stainless steel reactor (75 ml). The heat-
ing rate of the reactor was 25 K/min. The reactor was charged with the appropriate amounts
of seaweed biomass and water. In the catalytic runs, an appropriate amount of KOH was
added to the reactants. The influence of reactor loading, residence time, temperature and
catalyst (KOH) loading were assessed. The experimental conditions were found to have a
4 CONCLUSION
In the coming days, HTC is set to play a major role in the transformation of waste biomass
into versatile products, for instance fuels, adsorbents, anodes for Li-ion batteries, and soil
enrichment agents. An exhaustive study of the types of biomass available in India needs to
be carried out so that, as the need arises, hydrothermal carbonization may be used to
transform them into various useful products.
The author acknowledges the support received from National Institute of Technology, Srinagar,
Kashmir for facilitating the work.
REFERENCES
Anastasakis, K., & Ross, A.B. (2011). Hydrothermal liquefaction of the brown macro-alga Laminaria
saccharina: effect of reaction conditions on product distribution and composition. Bioresource tech-
nology, 102(7), 4876–4883.
Benavente, V., Calabuig, E., & Fullana, A. (2015). Upgrading of moist agro-industrial wastes by hydro-
thermal carbonization. Journal of Analytical and Applied Pyrolysis, 113, 89–98.
Bergius, F. (1996). Chemical reactions under high pressure. Nobel Lectures, Chemistry 1922–1941,
244–276.
Danso-Boateng, E., Shama, G., Wheatley, A.D., Martin, S.J., & Holdich, R.G. (2015). Hydrothermal
carbonisation of sewage sludge: Effect of process conditions on product characteristics and methane
production. Bioresource technology, 177, 318–327.
Danso-Boateng, E., Holdich, R.G., Martin, S.J., Shama, G., & Wheatley, A.D. (2015). Process ener-
getics for the hydrothermal carbonisation of human faecal wastes. Energy Conversion and Manage-
ment, 105, 1115–1124.
Eibisch, N., Helfrich, M., Don, A., Mikutta, R., Kruse, A., Ellerbrock, R., & Flessa, H. (2013).
Properties and degradability of hydrothermal carbonization products. Journal of environmental qual-
ity, 42(5), 1565–1573.
Funke, A., & Ziegler, F. (2010). Hydrothermal carbonization of biomass: a summary and discussion of
chemical mechanisms for process engineering. Biofuels, Bioproducts and Biorefining, 4(2), 160–177.
Guiotoku, M., Rambo, C.R., & Hotza, D. (2014). Charcoal produced from cellulosic raw materi-
als by microwave-assisted hydrothermal carbonization. Journal of Thermal Analysis and Calori
metry, 117(1), 269–275.
Hoekman, S.K., Broch, A., & Robbins, C. (2011). Hydrothermal carbonization (HTC) of lignocellulosic
biomass. Energy & Fuels, 25(4), 1802–1810.
Hu, B.B., Wang, K., Wu, L., Yu, S.H., Antonietti, M. & Titirici, M.M. (2010). Engineering carbon mate-
rials from the hydrothermal carbonization process of biomass. Advanced Materials, 22(7), 813–828.
Lin, Y., Ma, X., Peng, X., Hu, S., Yu, Z., & Fang, S. (2015). Effect of hydrothermal carbonization tem-
perature on combustion behavior of hydrochar fuel from paper sludge. Applied Thermal Engineer-
ing, 91, 574–582.
Lin, Y., Ma, X., Peng, X., Yu, Z., Fang, S., Lin, Y., & Fan, Y. (2016). Combustion, pyrolysis and char
CO2-gasification characteristics of hydrothermal carbonization solid fuel from municipal solid
wastes. Fuel, 181, 905–915.
Liu, Z., & Balasubramanian, R. (2012). Hydrothermal carbonization of waste biomass for energy gen-
eration. Procedia Environmental Sciences, 16, 159–166.
Mau, V., Quance, J., Posmanik, R., & Gross, A. (2016). Phases’ characteristics of poultry litter hydro-
thermal carbonization under a range of process parameters. Bioresource technology, 219, 632–642.
Nizamuddin, S., Mubarak, N.M., Tiripathi, M., Jayakumar, N.S., Sahu, J.N., & Ganesan, P. (2016).
Chemical, dielectric and structural characterization of optimized hydrochar produced from hydro-
thermal carbonization of palm shell. Fuel, 163, 88–97.
Özçimen, D., & Ersoy-Meriçboyu, A. (2008). A study on the carbonization of grapeseed and chestnut
shell. Fuel Processing Technology, 89(11), 1041–1046.
Pruksakit, W., & Patumsawad, S. (2016). Hydrothermal Carbonization (HTC) of Sugarcane Stranded:
Effect of Operation Condition to Hydrochar Production. Energy Procedia, 100, 223–226.
Reza, M.T., Wirth, B., Lüder, U., & Werner, M. (2014). Behavior of selected hydrolyzed and dehydrated
products during hydrothermal carbonization of biomass. Bioresource technology, 169, 352–361.
Reza, M.T., Yan, W., Uddin, M.H., Lynam, J.G., Hoekman, S.K., Coronella, C.J., & Vásquez, V.R.
(2013). Reaction kinetics of hydrothermal carbonization of loblolly pine. Bioresource technol-
ogy, 139, 161–169.
Román, S., Nabais, J.M.V., Laginhas, C., Ledesma, B., & González, J.F. (2012). Hydrothermal car-
bonization as an effective way of densifying the energy content of biomass. Fuel Processing Technol-
ogy, 103, 78–83.
Sevilla, M., & Fuertes, A.B. (2009). The production of carbon materials by hydrothermal carbonization
of cellulose. Carbon, 47(9), 2281–2289.
M.S. Sinith
Rajiv Gandhi Institute of Technology, Kottayam, Kerala, India
ABSTRACT: For musical signals, the waveform of a single note has a repeating element, as
it contains a fundamental frequency and its harmonics. A wavelet designed specifically for a
musical instrument by taking this waveform as the scaling function can be used to analyze
these musical signals. Since the waveform of a single note used as a scaling function does
not satisfy the orthogonality property, the wavelets are designed as biorthogonal wavelets.
In this paper, the filter bank coefficients corresponding to this wavelet are derived from the
available analysis low-pass coefficients using the properties satisfied by biorthogonal
wavelets. Musical signals can be decomposed and reconstructed using this set of filter bank
coefficients. The coefficients thus obtained are modified using lifting technology for better
performance. The lifting scheme is an approach to constructing so-called second-generation
wavelets, which are not necessarily translates and dilations of one function. Corruption of
signals by noise is a major problem in signal processing. The musical signals are denoised
using classical wavelets and the two sets of filter bank coefficients obtained using the two
methods. The denoising is performed by adopting a proper thresholding method. For
performance comparison and measurement of denoising quality, the Signal-to-Noise Ratio
(SNR) is calculated between the original musical signal and the denoised signal. It is found
that the coefficients give better performance once modified using lifting technology.
1 INTRODUCTION
Transmitted signals are mostly corrupted by noise. Once corrupted by noise, a signal loses
its pure signal characteristics, and recovering these characteristics from the corrupted signal
is a major challenge in the signal processing area. The wavelet transform is a widely used
method for denoising signals since it gives better results. The wavelet transform replaces the
Fourier transform in analyzing non-stationary signals: it analyzes a signal by truncating it
with a window of variable time-frequency resolution called a wavelet. Daubechies introduced
the wavelet transform as a tool that cuts up data, functions or operators into different
frequency components, and then studies each component with a resolution matched to its
scale [1]. Even though wavelet analysis replaces Fourier analysis, it is a natural extension of
it. Wavelets have been called a mathematical microscope; compressing wavelets increases the
magnification of this microscope, enabling us to take a closer look at small details in the
signal [2]. The theory of wavelet analysis and the design of the filter bank coefficients are
given in [3]. The signals produced by musical instruments are non-stationary signals in which
short-duration or narrow-bandwidth musical pieces are placed at effective temporal positions
to give special effects. The wavelet transform serves as a good technique to analyze these
signals. In the
$$\psi(t) = \sum_{n=-\infty}^{\infty} h(n)\,\phi(2t-n) \qquad (2)$$
The method of generating φ(t) is shown in Figure 1. The input is an impulse function, and
after a few iterations the output obtained is the scaling function φ(t), as per the equations
given above.
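A short numpy sketch of this iterative construction is given below: starting from an impulse, the two-scale relation is applied repeatedly (upsampling by two, then convolution with h(n)). The Haar filter used here is an illustrative stand-in, since the SSM coefficients are obtained separately by the MPSO algorithm.

```python
import numpy as np

def cascade(h, iterations=6):
    """Approximate the scaling function phi(t) generated by the filter h."""
    phi = np.array([1.0])              # impulse input
    for _ in range(iterations):
        up = np.zeros(2 * len(phi) - 1)
        up[::2] = phi                  # upsample by two
        phi = np.convolve(up, h)       # apply the two-scale relation
    return phi

h = np.array([1.0, 1.0])               # Haar filter (sum = 2), for illustration
print(cascade(h, iterations=4))        # converges to the box (Haar) scaling function
```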
In the case of classical wavelets, the analysis and synthesis filter bank coefficients are the
same, since at a given scale the shifted versions of the scaling function and wavelet function
are orthogonal to each other. These conditions hold for standard wavelets like Daubechies
and Morlet. However, in the case of biorthogonal wavelets, the analysis and synthesis
filter bank coefficients are not the same. Since the waveform of the single note of the
musical signal used as the scaling function does not satisfy the orthogonality property, the
wavelets are designed to be biorthogonal. Hence the remaining analysis and synthesis filter
bank coefficients can be found using the properties of biorthogonal wavelets. Using the
biorthogonal wavelet and scaling function coefficients, faithful reconstruction can be
obtained. In order to get more accurate results, the filter bank coefficients can be modified
using the lifting technique [16]. Lifting results in a new wavelet with enough vanishing
moments.
Denoising of noisy signals can be seen as an important application of any wavelet designed
specifically for that signal. Wavelet based denoising is done by the thresholding of wavelet
coefficients. In wavelet analysis high amplitude coefficients mainly represent signal and low
amplitude coefficients with randomness represent noise. If a signal has its energy concentrated
in a small number of wavelet coefficients, its coefficients will be large compared to any other
signal or noise that has its energy spread over a large number of coefficients. Denoising is
achieved by selecting an appropriate threshold for such high amplitude coefficients [17]–[18].
The rest of the paper is organised as follows. Section II briefly explains filter bank theory.
Section III explains the need for biorthogonal wavelets and their design methodology. The
method of modifying the wavelets using lifting technology is given in section IV. Section V
explains the denoising of musical signals. The simulation results are given in Section VI. The
paper is concluded in Section VII.
Wavelet decomposition and reconstruction of a musical signal based on multiresolution
theory can be implemented using digital FIR filter banks. Figure 2 shows the filter bank
implementation of wavelets. The filters h(n) and g(n) are the analysis (decomposition)
filters, low-pass and high-pass respectively, and their outputs are downsampled by two. The
high-frequency coefficients are the detail coefficients and the low-frequency coefficients are
the approximation coefficients. Since musical signals need biorthogonal wavelets, the
synthesis (reconstruction) filters are the duals of the analysis filters. They are denoted as
h̃(n) and g̃(n) and are preceded by upsampling by two to give the reconstructed signal.
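A minimal PyWavelets sketch of this one-level analysis/synthesis structure is shown below; the standard biorthogonal wavelet 'bior2.2' stands in for the SSM filter bank, whose coefficients are not listed here.

```python
import numpy as np
import pywt

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))   # toy test signal

# Analysis: low-pass h(n) and high-pass g(n), each followed by downsampling by two.
cA, cD = pywt.dwt(x, 'bior2.2')        # approximation and detail coefficients

# Synthesis: upsampling by two followed by the dual filters.
x_rec = pywt.idwt(cA, cD, 'bior2.2')

print(np.max(np.abs(x - x_rec[:len(x)])))   # perfect reconstruction (~1e-15)
```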
In the case of musical signals, the repeating elements are such that the scaling function
obtained from them does not satisfy the orthogonality condition. The wavelets therefore need
to be designed as biorthogonal. The analysis and synthesis filter coefficients are different in
the biorthogonal case. In biorthogonal wavelets, there is a dual scaling function, denoted
φ̃(t), in addition to the scaling function generated by h(n). In the case of orthogonal wavelets
φ(t) is orthogonal to its own translates, whereas in the case of biorthogonal wavelets φ(t) is
orthogonal to the translates of φ̃(t). Similarly, φ(t) is orthogonal to ψ(t) for ordinary
wavelets, but φ(t) is orthogonal to ψ̃(t) for biorthogonal wavelets.
Mathematically, biorthogonality implies,
$$\psi(t) = \sum_{n} g(n)\,\phi(2t-n) \qquad (6)$$

$$\tilde{\psi}(t) = \sum_{n} \tilde{g}(n)\,\tilde{\phi}(2t-n) \qquad (7)$$

where

$$g(n) = (-1)^{n}\,\tilde{h}(1-n) \qquad (8)$$

and

$$\tilde{g}(n) = (-1)^{n}\,h(1-n) \qquad (9)$$

The filter coefficients h(n) are obtained using the MPSO algorithm. The coefficients for the
dual scaling function, h̃(n), are designed from h(n) so that they satisfy the following
conditions. Normality of the dual scaling function:

$$\sum_{k} \tilde{h}(k) = 2 \qquad (10)$$

h̃(n) is obtained by solving Equation (10), Equation (11) and Equation (12). The obtained
h(n) and h̃(n) values are used to design g(n) and g̃(n) as shown in Equation (8) and Equation
(9). These values are substituted in Equation (6) and Equation (7) to get the biorthogonal
wavelet functions.
4 LIFTING TECHNOLOGY
The lifting scheme is an approach to constructing so-called second-generation wavelets,
which are not necessarily translates and dilations of one function [16]. In the present work,
the lifting scheme in the Z domain is used to make a new set of filter coefficients from the
existing wavelet. It works in the spatial domain. Using lifting technology, a set of filter
coefficients can be modified into a new set of filters without affecting the perfect
reconstruction property. Figure 3 shows the basic idea of lifting. The filter u(n) in the left
part of the diagram modifies the high-pass filtered signal by adding to it a weighted sum of
low-pass filtered signal coefficients. In the right part of the diagram, u(n) nullifies this
change by subtracting the same quantity. Thus, in the left part u(n) modifies the detail
coefficients, thereby modifying the high-pass analysis filter coefficients, and on the right side
u(n) performs the undo operation, giving back the high-pass filtered signal. This results in a
set of new analysis low-pass filter coefficients and synthesis high-pass filter coefficients. The
lifting of SSM wavelets is described as follows.
Initially h(n) and g̃(n) are taken as such, while g(n) and h̃(n) are taken as the Haar wavelet
coefficients, i.e., g(n) = [1 −1] and h̃(n) = [1 1]. In the Z domain, the four filters can be
written as

$$H(z) = h_{1} + h_{2} z^{-1} + \cdots + h_{K} z^{-K} \qquad (13)$$

$$G(z) = 1 - z^{-1} \qquad (14)$$

$$\tilde{H}(z) = 1 + z^{-1} \qquad (15)$$

$$\tilde{G}(z) = \tilde{g}_{1} + \tilde{g}_{2} z^{-1} + \cdots + \tilde{g}_{K} z^{-K} \qquad (16)$$
Applying the moment conditions to the above equations, the constants α₁ and α₂ can be
found and the new wavelet function is obtained. From this, the modified analysis high-pass
filter g_new(n) can be obtained. The equation connecting g_new(n) and u(n) in the Z domain
is as follows:

$$G^{new}(z^{-1}) = G(z^{-1}) + U(z^{2})\,\tilde{H}(z^{-1}) \qquad (18)$$

u(n) can be found from the above equation, so that the modified $\tilde{h}^{new}(n)$ can be
obtained as follows:
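A small numpy sketch of the update step of Equation (18) is given below: the new high-pass filter is the old one plus the lifting filter u(n), upsampled by two and convolved with the dual low-pass filter. The single-tap u(n) used here is illustrative; in the paper u(n) is fixed by the moment conditions.

```python
import numpy as np

def lift_highpass(g, h_dual, u):
    """g_new(n) corresponding to G_new(z) = G(z) + U(z^2) * H~(z), Eq. (18)."""
    u_up = np.zeros(2 * len(u) - 1)
    u_up[::2] = u                       # U(z^2): insert zeros between the taps of u
    update = np.convolve(u_up, h_dual)  # polynomial product in z
    g_new = np.zeros(max(len(g), len(update)))
    g_new[:len(g)] += g
    g_new[:len(update)] += update
    return g_new

g = np.array([1.0, -1.0])       # G(z) = 1 - z^-1   (Eq. 14)
h_dual = np.array([1.0, 1.0])   # H~(z) = 1 + z^-1  (Eq. 15)
u = np.array([0.25])            # illustrative single-tap lifting filter
print(lift_highpass(g, h_dual, u))   # -> [ 1.25 -0.75]
```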
Consider a clean musical signal x(n) of length N corrupted by additive white Gaussian noise
w(n). The noisy signal v(n) is then given by

$$v(n) = x(n) + w(n) \qquad (20)$$
Here the idea is to recover the signal x(n) from this noisy signal. For a particular musical
instrument, the most suited wavelet will have maximum energy concentrated in the
approximation coefficients rather than in the detail coefficients; hence only the detail
coefficients are denoised. The denoising is done by proper thresholding of the detail
coefficients obtained after decomposition of the noisy signal. Denoising can thus be viewed
as three steps.
1. Decomposition of the noisy signal
2. Thresholding of the Detailed Coefficients
3. Reconstruction of the original signal
$$\lambda_j = \sigma_j \sqrt{\frac{2\log(N)}{N}} \qquad (21)$$

$$\sigma_j = \frac{\mathrm{MAD}_j}{0.6745} \qquad (22)$$

where MAD_j is the median absolute value of the wavelet coefficients at level j.

$$\hat{\upsilon}_{j,k} = \begin{cases} \upsilon_{j,k} - \lambda, & \text{if } |\upsilon_{j,k}| \geq \lambda \\ 0, & \text{otherwise} \end{cases} \qquad (23)$$
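A minimal PyWavelets sketch of the three denoising steps, using the threshold of Equations (21) and (22) and the thresholding rule of Equation (23) as reconstructed above, is given below; 'coif5' is used here as a stand-in wavelet.

```python
import numpy as np
import pywt

def denoise(v, wavelet='coif5', level=4):
    N = len(v)
    coeffs = pywt.wavedec(v, wavelet, level=level)       # 1. decomposition
    out = [coeffs[0]]                                    # approximations kept as-is
    for d in coeffs[1:]:                                 # detail coefficients
        sigma = np.median(np.abs(d)) / 0.6745            # Eq. (22): MAD-based estimate
        lam = sigma * np.sqrt(2.0 * np.log(N) / N)       # Eq. (21)
        out.append(pywt.threshold(d, lam, mode='soft'))  # 2. soft thresholding, Eq. (23)
    return pywt.waverec(out, wavelet)                    # 3. reconstruction

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440 * np.linspace(0, 0.05, 2048))  # toy flute-like tone
v = x + 0.2 * rng.standard_normal(x.size)
x_hat = denoise(v)
```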
6 SIMULATION RESULTS
1. Using standard wavelets: The set of flute signals is denoised using standard wavelets.
   Three wavelet families are taken: Symlets 2 to 8, Daubechies 2 to 10 and Coiflets 1 to 5.
   The results show that Coiflet5 gives the best denoising performance.
2. Using biorthogonal SSM wavelets: The set of flute signals is denoised using the
   biorthogonal SSM wavelets. The flute signal corrupted with noise is first decomposed into
   approximation and detail coefficients. The threshold is calculated as per Equations (21)
   and (22), the detail coefficients are thresholded using Equation (23), and finally the signal
   is reconstructed. Figure 6 shows the result.
3. Using modified SSM wavelets after lifting: The set of flute signals is denoised using the
   modified SSM wavelets, following the same method described above. Figure 7 shows the
   result; the denoised signal is more similar to the original signal than the result in Figure 6.
In order to compare the quality and performance of the denoising using the two sets of
filter coefficients, the signal-to-noise ratio of the denoised signal is determined. The SNR is
calculated as follows.
$$\mathrm{SNR} = 10\log_{10}\frac{\displaystyle\sum_{k=0}^{N-1} x_k^{2}}{\displaystyle\sum_{k=0}^{N-1} \left(x_k - \hat{x}_k\right)^{2}} \qquad (24)$$

where x is the original signal and x̂ is the denoised signal. The SNR values obtained are
shown in Table 3. The modified SSM wavelet gives the highest SNR.
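In code, Equation (24) is a one-liner; a minimal numpy sketch:

```python
import numpy as np

def snr_db(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Eq. (24): ratio of signal energy to residual energy, in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
```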
8 CONCLUSION
Musical signals are analyzed using a wavelet specifically designed for these signals. Using
the already available analysis low-pass coefficients of this wavelet, the biorthogonal SSM
analysis and synthesis filter coefficients are found by adopting the properties satisfied by
biorthogonal wavelets. Using these coefficients, the musical signal is decomposed and
reconstructed. The coefficients are then modified using lifting technology with the aim of
improving the reconstruction performance; the resulting reconstructed signal shows more
similarity to the original signal than the previous one. Denoising of musical signals is
performed using these filter bank coefficients, with an appropriate threshold calculated for
removing the detail coefficients corresponding to noise. The noisy signal is also denoised
using standard wavelets; among them, Coiflet5 is found to be the most suitable for denoising
a musical signal. The noise in the noisy musical signal is reduced when it is denoised using
the two sets of SSM coefficients obtained, and the results show that applying lifting
improves the denoising performance.
REFERENCES
1 INTRODUCTION
In this section, we discuss an algorithm to construct the encoding matrix which linearly
transforms the K-tuple symbols from the $\mathbb{Z}_M^K$ message symbol space into a
$\mathbb{Z}_M^K$ coded symbol space. This transformation includes all the possible K-tuple
symbols in the construction of the transformation matrix (i.e., the circulant encoding
matrix). For K-tuple symbols with a symbol set of size M, there are $M^K$ possible encoding
matrices; the complexity of constructing an optimum encoding matrix to map the message
symbols onto the coded symbols therefore increases exponentially with K. The simplest way
to find K linearly independent code vectors is to construct circulant matrices using all
possible symbols, which significantly reduces the complexity of the computer searches
(Natarajan et al. 2015a).
In order to span the K-tuple message symbol space over modulo M, we consider the symbol
set $\mathbb{Z}_M = \left\{-\tfrac{M}{2}, -\tfrac{M-2}{2}, \ldots, 0, \ldots, \tfrac{M-2}{2}\right\}$ for even values of M, and
$\mathbb{Z}_M = \left\{-\tfrac{M-1}{2}, -\tfrac{M-3}{2}, \ldots, 0, \ldots, \tfrac{M-1}{2}\right\}$ for odd values of M. $\mathbb{Z}_M$ has the structure of a
commutative ring with addition and multiplication performed modulo M. A unit, U(M), of
the defined set $\mathbb{Z}_M$ is an odd integer in the set $\mathbb{Z}_M$ when M is even, and an even integer
when M is odd.
The injectivity of the linear encoder X gives unique decodability at the receiver side. A linear
index code is completely characterized by the matrix C whose rows are the K generators
c1, c2, …, cK. The encoding matrix C defines a linear transformation in which the matrix
multiplies the message symbols to form the codewords. The encoder mapping is thus injective
if and only if C is invertible, and the matrix is constructed so that its determinant is a unit
of the symbol set $\mathbb{Z}_M$.
$$\begin{vmatrix} o_1 & e_1 \\ e_1 & o_1 \end{vmatrix} = o_1 o_1 - e_1 e_1 = o - e = \text{odd number} \qquad (1)$$

$$\begin{vmatrix} o_1 & e_1 & e_2 \\ e_2 & o_1 & e_1 \\ e_1 & e_2 & o_1 \end{vmatrix} = \text{odd number} \qquad (2)$$
$$C = \begin{bmatrix} -2 & -3 \\ -3 & -2 \end{bmatrix}$$

The determinant of the encoding matrix is −5, which is congruent to 3 (mod 8); this is an
odd number and hence a unit of the set $\mathbb{Z}_8$. Therefore, this linear encoder is injective. The
labelling scheme for a two-dimensional 64-QAM constellation is shown in Figure 2.
3.2 Complexity
In the existing algorithm (Natarajan et al. 2015a), in order to reduce the complexity of the
exhaustive search space, the authors chose as encoding matrices the circulant matrices whose
determinant is a unit of the set $\mathbb{Z}_M$, which provides the largest minimum Euclidean
distance between the coded symbols. As discussed in Section 3.1, the proposed algorithm
minimizes the complexity by reducing the computer searches significantly. For example, for
K = 2 and M = 32, there are 32² = 1024 possible symbols (i.e., encoding matrices). The
number of searches, computed from the simulation, is 257 in the proposed algorithm,
whereas in (Natarajan et al. 2015a) it is K × 257 = 2 × 257 = 514.
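A minimal Python sketch of this circulant search is given below; it enumerates candidate first rows, forms the circulant matrix, and keeps those whose determinant is a unit of Z_M (an odd residue when M is even). The enumeration details are illustrative, not the authors' exact search.

```python
import itertools
import numpy as np
from scipy.linalg import circulant

def is_unit(det: int, M: int) -> bool:
    # For even M, the units of Z_M are exactly the odd residues.
    return det % 2 == 1

def search_circulant_codes(K: int, M: int):
    """Yield first rows whose circulant matrix has a unit determinant (even M)."""
    symbols = range(-M // 2, M // 2)      # Z_M = {-M/2, ..., (M-2)/2}
    for first in itertools.product(symbols, repeat=K):
        # scipy builds the circulant from the first column; the determinant is
        # the same as for the row convention (the matrices are transposes).
        C = circulant(np.array(first))
        det = int(round(np.linalg.det(C)))
        if is_unit(det, M):
            yield first, det

# Verify the K = 2, M = 8 example quoted above.
C = circulant(np.array([-2, -3]))
print(int(round(np.linalg.det(C))))       # -> -5 (odd, hence a unit of Z_8)
```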
Figure 2. The 64-QAM constellation. The eight points forming the subcode corresponding to the side
information when w1 = −4 are highlighted with circles and the subcode for w2 = −4 is marked with squares.
$$R_k = \frac{1}{K}\log_2 M \ \text{b/dim} \qquad (3)$$

$$R_S = \sum_{k \in S} R_k = \frac{|S|}{K}\log_2 M \ \text{b/dim} \qquad (4)$$

The coding gain achieved with side information available at the receiver is given by:

$$\Gamma = \frac{10\log_{10}\left(d_S^2 / d_0^2\right)}{R_S} \ \text{dB/b/dim} \qquad (5)$$
Table 1. Comparison between the existing algorithm and the proposed algorithm (one side informa-
tion is known) with first row of the circulant encoding matrix and the gain.
Table 2. Comparison between the existing algorithm and the proposed algorithm (two side informa-
tion are known) with first row of the circulant encoding matrix and the gain.
Table 3. Comparison between the existing algorithm and the proposed algorithm (three side
information are known) with first row of the circulant encoding matrix and the gain.

M    Existing algorithm (K = 4): first row, gain    Proposed algorithm (K = 4): first row, gain
4    (1,1,-1,0), 3.18                               (-2,-2,-2,-1), 4.01
8    (1,0,3,3), 4.62                                (-4,-1,2,2), 5.35
16   (1,4,-6,-8), 6.02                              (-8,-5,-4,2), 6.02
32   (1,10,14,2), 5.80                              (-15,10,14,2), 5.80
64   (1,-26,20,30), 6.08                            (-31,-30,20,26), 6.08
where dS is the distance between the constellation points when either of the side information is
known to the receiver and d0 is the distance between any two adjacent points in the constellation.
In this section, we present the results for K = 2, 3, 4 and M = 4, 8, 16, 32, 64. With the help
of a computer search we find the best linear index codes which provide the maximum coding
gain. The coding gains achieved by the code when one, two, and three side information
symbols are known are detailed in Tables 1, 2, and 3, respectively. The results presented in
the tables substantiate that the encoding matrix suggested by the proposed algorithm offers
a side information gain comparable to the existing work, with low computational complexity.
5 CONCLUSION
REFERENCES
Bar-Yossef, Z., Y. Birk, T. Jayram, & T. Kol (2011). Index coding with side information. IEEE Transac-
tions on Information Theory 57(3), 1479–1494.
Birk, Y. & T. Kol (2006). Coding on demand by an informed source (ISCOD) for efficient broadcast of dif-
ferent supplemental data to caching clients. IEEE/ACM Transactions on Networking 14(5), 2825–2830.
Effros, M., S. El Rouayheb, & M. Langberg (2015). An equivalence between network coding and index
coding. IEEE Transactions on Information Theory 61(5), 2478–2487.
Koetter, R. & M. Médard (2003). An algebraic approach to network coding. IEEE/ACM Transactions
on Networking 11(5), 782–795.
Mahesh, A.A. & B.S. Rajan (2016). Noisy index coding with PSK and QAM. arXiv preprint
arXiv:1603.03152.
Natarajan, L., Y. Hong, & E. Viterbo (2015a). Index codes for the gaussian broadcast channel using
quadrature amplitude modulation. IEEE Communications Letters 19(8), 1291–1294.
Natarajan, L., Y. Hong, & E. Viterbo (2015b). Lattice index coding. IEEE Transactions on Information
Theory 61(12), 6505–6525.
Nissan Kunju
Department of Electronics and Communication Engineering, TKM College of Engineering, Kollam, India
1 INTRODUCTION
Assistive devices for neurological rehabilitation, for example active prostheses, are controlled
by man-machine interfacing. Nowadays, myoelectric control has evolved as the most promising
approach to control devices utilized in clinical and commercial applications (Jiang et al. 2012,
Fougner et al. 2012, Scheme & Englehart 2009). In spite of the fact that nerve and brain
recordings are exceptionally encouraging for direct neural interfacing, they often require
invasive methods for electrode placement, which limits their practical applicability to
laboratory research or small-scale clinical testing (Micerra & Navarro 2009). Although
industrial developers like Otto Bock (Germany) and Touch Bionics (USA) have introduced
surface EMG based artificial limbs in the market, EMG based control is still in a premature
state, being limited to a few hand postures and requiring a higher number of EMG channels
for effective control.
In this paper a scheme for the classification of human hand grasps from surface EMG signals
is presented. The novelty of our approach lies in the use of Empirical Mode Decomposition
(EMD) for feature extraction combined with Differential Evolution based Feature Selection
(DEFS). The feature selection framework that has been utilized in this study also gives a
versatile approach to improve the developed model's comprehensibility by selecting the
optimum feature subset adaptively (Khushaba R. et al. 2008, Storn R. 2008, Ahmed Al-Ani
et al. 2013). Our results prove that the methodology of using EMD with DEFS can achieve
significantly good results.
2 PROPOSED METHODOLOGY
The work presented in this paper stems from the desire to design a self-contained pros-
thetic system. For the laboratory stage of the work, a standard PC installed with Windows
3 ELECTROMYOGRAM ACQUISITION
The EMG data is recorded from eleven healthy subjects (aged between 20 and 30 years).
Before the start of the experiment, the subjects were thoroughly familiarized with the
experimental protocol and the EMG equipment. A four-channel generic EMG data acquisition
system, CMCdaq, is used to acquire the data at a sampling rate of 1000 Hz. The Ag/AgCl
electrodes were attached over the muscle belly in line with the muscle fibres in accordance
with the standard procedure in the literature (Shrirao N.A. et al. 2009). The four surface
EMG electrodes were placed on the forearm muscles Flexor Carpi Ulnaris, Extensor Carpi
Radialis, Extensor Digitorum and Flexor Digitorum Superficialis, and the ground electrode
was placed on the contralateral upper limb. This follows the electrode placement discussed in
(Frank F.H. 1989). Three trials of the six different grasps, each with a duration of six
seconds, were performed, and the speed and force were intentionally left to the subject's will.
The Maximum Voluntary Isometric Contraction (MVIC) test was also executed by having
the subject flex and extend his/her hand at the wrist joint, exerting the maximum possible
force at the maximum possible inclination and sustaining it for up to six seconds. The six
basic hand grasps (Schlesinger G. 1919) and the experimental setup are shown in Fig. 1 and
Fig. 2 respectively.
Figure 3. Illustration of evolution of the MAV values of EMG signal starting from rest along with the
threshold value (shown as dotted line) for spherical grasp.
Surface Electromyographic (sEMG) signals are usually affected by noise, and in practice the
acquired signal may be corrupted; hence the raw signal needs to be preprocessed before
further processing. The usable frequency range of the EMG signal is considered to be
15–500 Hz, as most of the energy is concentrated in this frequency range (De Luca C.J
1998). The acquired signal is filtered using a 4th-order Butterworth bandpass filter with a
high-pass cutoff frequency of 15 Hz and a low-pass cutoff frequency of 500 Hz. After
filtering, each channel is normalized using the MVIC obtained for each muscle.
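A minimal scipy sketch of this preprocessing step follows. Note that 500 Hz equals the Nyquist frequency at a 1000 Hz sampling rate, so a realizable digital filter must use an upper cutoff slightly below it; the value 499 Hz here is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate, Hz

def preprocess(emg: np.ndarray, mvic_value: float) -> np.ndarray:
    """4th-order Butterworth band-pass (15-499 Hz) + MVIC normalization."""
    b, a = butter(4, [15.0, 499.0], btype='bandpass', fs=FS)
    filtered = filtfilt(b, a, emg)   # zero-phase filtering of the raw channel
    return filtered / mvic_value     # express amplitude as a fraction of MVIC
```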
For each grasp the level of involvement of each muscle is different, and a dynamic threshold
selection approach based on local characteristics of the signal is implemented for offline
analysis (Fig. 3). A sliding window approach is used to focus only on segments where the
muscle is contracted. Keeping the sliding window size at 50 ms, the Mean Absolute Value
(MAV) is calculated for each window, and once that value exceeds a threshold the muscle is
no longer considered to be in the resting phase. In order to preserve the temporal
information, recordings were taken for processing on due activation of any one of the four
channels.
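A minimal numpy sketch of this windowed MAV onset detection is given below; the threshold value itself is assumed to come from the resting-phase statistics of each channel.

```python
import numpy as np

FS = 1000                 # sampling rate, Hz
WIN = int(0.050 * FS)     # 50 ms window

def mav_windows(emg: np.ndarray) -> np.ndarray:
    """MAV of consecutive non-overlapping 50 ms windows."""
    n = len(emg) // WIN
    return np.abs(emg[:n * WIN]).reshape(n, WIN).mean(axis=1)

def first_active_window(emg: np.ndarray, threshold: float) -> int:
    """Index of the first window where the muscle leaves the resting phase."""
    mav = mav_windows(emg)
    active = np.flatnonzero(mav > threshold)
    return int(active[0]) if active.size else -1
```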
Each grasping operation is characterized by a unique motor unit firing pattern (Kilbreath
et al. 2002, Boser B.E. et al. 1992); identifying and describing patterns that better
discriminate different classes of grasp over different trials is the core philosophy of feature
extraction. The adaptive nature of the decomposition and its ability to preserve the varying
frequency content in time make EMD a powerful choice for the analysis of EMG signals
(Huang et al. 1998, Andrade A.O. 2004, Flandrin P. 2004). In this study the 10 most popular
features (Ericka Janet & Huosheng Hu 2011) from the time and frequency domains are used
for pattern recognition: Mean Absolute Value, Variance, Kurtosis, Skewness, Slope Sign
Change, Waveform Length, Zero Crossing, Mean Power Spectral Density, Median Power
Spectral Density and Root Mean Square value (RMS). Fig. 4 depicts the raw EMG signal
with the corresponding IMFs from the flexor digitorum muscle during a lateral grasp. The
feature selection framework (Khushaba R. et al. 2008, Storn R. 2008, Ahmed Al-Ani et al.
2013) employed in this work is depicted in Fig. 5.
According to Christos Sapsanis et al. (2013), the incorporation of features derived from the
first three IMFs improves the overall recognition rate. However, in our study it is found that
no significant contribution in terms of classification accuracy is received from feature sets
derived from IMFs beyond the second decomposition level; feature sets derived from
higher-order IMFs sometimes even deteriorate the overall performance. Hence an ensemble
of the aforementioned features from the raw signal and from the first two IMFs is taken;
this feature set is denoted as TDEMD. The proposed TDEMD method is further compared
against the Discrete Wavelet Packet Transform (DWPT), utilizing the energy of the wavelet
coefficients at each node using the Daubechies family of wavelets at 4 levels of
decomposition; an ensemble of features extracted from statistical and autoregressive
modeling (TA); and the Discrete Wavelet Transform (DWT), utilizing the Standard
Deviation, Entropy, Waveform Length and Energy of the wavelet coefficients using the
Daubechies family of wavelets at 4 levels of decomposition.
Figure 4. EMG signal with corresponding IMF’s from flexor digitorum muscle during lateral grasp.
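A minimal Python sketch of the TDEMD construction is given below, using the PyEMD package for the decomposition; only four of the ten features named above are shown, for brevity.

```python
import numpy as np
from PyEMD import EMD

def basic_features(x: np.ndarray) -> list:
    mav = np.mean(np.abs(x))                               # Mean Absolute Value
    rms = np.sqrt(np.mean(x ** 2))                         # Root Mean Square
    wl = np.sum(np.abs(np.diff(x)))                        # Waveform Length
    zc = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))   # Zero Crossings
    return [mav, rms, wl, zc]

def tdemd_features(segment: np.ndarray) -> np.ndarray:
    """Features from the raw segment plus its first two IMFs."""
    imfs = EMD()(segment)                     # empirical mode decomposition
    sources = [segment] + list(imfs[:2])      # raw signal + first two IMFs
    return np.concatenate([basic_features(s) for s in sources])
```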
6 PATTERN RECOGNITION
There are several schemes for pattern recognition based on artificial intelligence and
statistical methods, and all of them have been tested with mixed success. In order to assess
the effect of the pattern recognition algorithm on the performance of the proposed TDEMD
approach, different classifiers were tested in our study. Owing to their wide acceptance in
EMG based applications, classifiers including Quadratic Discriminant Analysis (QDA)
(Oskoei M.A. & Hu H. 2007), Support Vector Machine (LIBSVM) (Oskoei M.A. & Hu H.
2008), K-Nearest Neighbor (KNN, k = 1) (Cover T.M. & Hart P.E. 1967) and Extreme
Learning Machines (ELM) (Huang et al. 2012) were utilised in this study. The parameters of
the SVM are estimated by conducting a grid search with cross validation.
The parameters of the SVM are estimated by conducting a grid search with cross validation. Even though the feature selection algorithm is designed to select the optimum number of features, an 'inner' and 'outer' loop validation scheme is used to produce a generalized result, since the wrapper feature selection algorithm used here depends on the training and testing samples. The outer loop performs feature selection after randomly partitioning the feature set into training and testing samples; 70% of the samples are chosen for training and the remainder for testing. This is repeated ten times, and during each iteration of the outer loop the inner loop performs classification with the selected feature subset using 10-fold cross validation. The average results obtained are taken as the final result. This inner and outer loop validation also helps to reduce the effect of the initial population on the final result.
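A minimal sketch of this outer/inner validation scheme is given below, assuming scikit-learn. The feature selector is abstracted as select_features(), a hypothetical stand-in for the DEFS wrapper used in the paper, and the SVC settings are placeholders.

```python
# Hedged sketch of the outer/inner validation: ten outer iterations with
# a random 70/30 split for feature selection, then 10-fold CV on the
# selected subset; results are averaged over the outer iterations.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

def nested_validation(X, y, select_features, n_outer=10):
    scores = []
    for seed in range(n_outer):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.7, random_state=seed, stratify=y)
        idx = select_features(X_tr, y_tr)   # wrapper feature selection
        inner = cross_val_score(SVC(), X[:, idx], y, cv=10)
        scores.append(inner.mean())
    return np.mean(scores)                  # averaged final result
```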
Misclassifications are considerably reduced using the LIBSVM/ELM classifiers. Among the four classifiers, LIBSVM is chosen to compare the performance of the proposed TDEMD approach with the other feature extraction techniques. Though the overall results are promising, it is noted that there is a subject dependency in the recognition of each grasp which has to be further evaluated.
8 CONCLUSIONS
This paper presents a preliminary report on ongoing research to develop dexterous and natural control of powered upper limbs using EMG signals. The outcome of this stage of our research shows that the methodology of using EMD with DEFS can achieve significantly good results. The variation of the grasp recognition rate among different subjects uncovers the requirement of fine tuning the algorithm. Detailed analysis will be carried out in future with a database created by involving more subjects, including amputees.
Al-Ani, Ahmed, Alsukker, Akram, Khushaba, R. 2013. Feature subset selection using differential evolution and a wheel based search strategy. Swarm and Evolutionary Computation, Volume 9, Pages 15–26.
Andrade, A.O., Kyberd, P.J., Nasuto, S. 2004. Time–frequency analysis of surface electromyographic
signals via Hilbert spectrum, in: S.H. Roy, P. Bonato, J. Meyer (Eds.), XVth ISEK Congress—An
Invitation to Innovation, Boston, MA, USA.
Boser, B.E., Guyon, I.M., and Vapnik, V.N. 1992. A training algorithm for optimal margin classifiers.5th
Annual ACM Workshop on COLT, Pittsburgh.
Cover, T.M., and Hart, P.E. 1967. Nearest neighbor pattern classification. IEEE Trans. Inform. Theory, vol. IT-13, pp. 21–27.
De Luca, C.J. May 1998. The Use of Surface Electromyography in Biomechanics. Journal of Applied Biomechanics, Volume 13, Issue 2.
Ericka Janet Rechy-Ramirez and Huosheng Hu. Stages for Developing Control Systems using EMG and EEG Signals: A Survey. Technical Report CES-513, ISSN 1744-8050.
Flandrin, P., Rilling, G. and Goncalv, P. 2004. Empirical mode decomposition as a filter bank. IEEE
Signal Process.Lett.vol. 11, no. 2, pp. 112–114.
Fougner, A., Stavdahl, O., Kyberd, P.J., Losier, Y.G., and Parker, P.A. Sep.2012. Control of upper limb
prostheses: Terminology and proportional myoelectric control—A review. IEEE Trans. Neural Syst.
Rehabil. Eng., vol. 20, no. 5, pp. 663–677.
Frank, F.H., 1989. Atlas of Orthopedic Anatomy. Ciba—Geigy, Switzerland.
Huang, G.-B., Zhou, H., Ding, X., & Zhang, R. 2012. Extreme learning machine for regression and
multiclass classification.IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics,
42(2), 513–529.
Huang, N.E., Shen, Z., Long, S.R., Wu, M.C., Shih et al. March 1998. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-stationary Time Series Analysis. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 454, No. 1971, pp. 903–995.
Jiang, N., Dosen, S., Muller, K.R. and Farina, D. Sep. 2012. Myoelectric control of artificial limbs—Is there a need to change focus? [In the spotlight]. IEEE Signal Processing Magazine, vol. 29, no. 5, pp. 150–152.
Khushaba, R., Al-Ani, A., Al-Jumaily, 2008. A. Differential evolution based feature subset selection.
Proceedings of the International Conference Pattern Recognition (ICPR’08).
Kilbreath, S.L., Gorman, R.B., Raymond, J. and Gandevia, S.J. 2002. Distribution of the forces produced by motor unit activity in the human flexor digitorum profundus. Journal of Physiology, vol. 543, no. 1, pp. 289–296.
Kunju, Nissan, Ojha, Rajdeep, Devasahayam, Suresh R. 2013. A palmar pressure sensor for measurement of upper limb weight bearing by the hands during transfers by paraplegics. Journal of Medical Engineering and Technology, Vol. 37, No. 7, Pages 424–428.
Kunju, Nissan, Tharion, George, Devasahayam, Suresh R., Manivannan, M. 2013. Muscle Activation Pattern and Weight Bearing of Limbs during Wheelchair Transfers in Normal Individuals—a step towards Lower Limb FES Assisted Transfer for Paraplegics. Converging Clinical and Engineering Research on NeuroRehabilitation, pp. 197–201. Biosystems and Biorobotics Series, Springer (doi:10.1007/978-3-642-34546-3_31).
Micera, S.and Navarro, X. Jan. 2009. Bidirectional interfaces with the peripheral nervous system. Int.
Review of Neurobiology. vol. 86, pp. 23–38.
Oskoei, M. A., & Hu, H. 2007. Myoelectric control systems—a survey. Biomedical Signal Processing and
Control, 2(4), 275–294.
Oskoei, M. A., & Hu, H. 2008. Support vector machine-based classification scheme for myoelectric
control applied to upper limb. IEEE Transactions on Biomedical Engineering, 55(8).
Sapsanis, C., Georgoulas, G. and Tzes, A. 2013. EMG based classification of basic hand movements based on time-frequency features. 21st Mediterranean Conference on Control and Automation (MED).
Scheme, E. and Englehart, K.. 2011. Electromyogram pattern recognition for control of powered upper-
limb prostheses: State of the art and challenges for clinical use. Journal of Rehabilitation Research
Development.vol. 48, no. 6, p. 643.
Schlesinger, G. 1919. The mechanical construction of the artificial limb. Verlag von Julius Springer, pp. 321–661.
Shrirao, N.A., Reddy, N.P. and Kosuri, D.R. 2009. Neural network committees for finger joint angle
estimation from surface Emg signals. Journal of Bio Medical OnLine, vol. 15, no. 2, pp. 529–535.
Storn, R. 2008. Differential evolution research—trends and open questions, in: U.K. Chakraborty (Ed.),
Advances in Differential Evolution SCI, vol. 143, Springer-Verlag, Berlin, Heidelberg, pp. 1–31.
The aggregated confusion matrix (average over eleven subjects) illustrating the performance of the TDEMD method with the different classifiers is given below (rows: true class; columns: predicted class).

             PREDICTED
TRUE       S      C      L      T      P      H
  S     5728     68     32      0     22     56
  C       48   5508     36      0     33     22
  L       22     40   5570    106     76     44
  T       28     24     84   5696     56     18
  P        2     12     64     96   5578     28
  H       88     48     32     14     44   5792
ABSTRACT: Agriculture has proven to be an arena in which technology has crucial roles
to play. Presently, agriculture automation has a wider scope with emerging trends such as
Controlled Environment Agriculture, Precision Farming etc. This paper describes the design
and implementation of an automated aquaponics system in the framework of Internet of
Things. Aquaponics is the integration of recirculating aquaculture with hydroponics. It is a
sensitive system in which several parameters are to be maintained at certain optimum values
in order to ensure proper functioning of the system. The implemented automated aquaponics
system enables web-based remote monitoring of the important parameters via the ThingSpeak
IoT platform using Arduino Uno, ESP8266-01 and several sensors. Arduino-based control
of certain parameters is also possible in case of deviation from setpoint. In addition, there is
an SD-card data-logger used to store the data for future analysis.
1 INTRODUCTION
Integration of technology with agriculture has remarkably increased the ease and efficiency of
agriculture. Presently, technology-related agriculture is adopting newer dimensions such as Con-
trolled Environment Agriculture, Precision Farming (Mondal & Basu 2009) etc. These emerging
agricultural trends make use of various technological aspects such as WSN (Ojha et al. 2015),
IoT, Artificial Intelligence (Hashimoto et al. 2001, Lee 2000), control systems (De Baerdemaeker
et al. 2001) and so on with an aim to improve factors such as sustainability and food security.
Aquaponics is the integration of recirculating aquaculture with hydroponics in a single sys-
tem (Diver 2000). It is a form of Controlled Environment Agriculture and is thus a technology-
related agricultural practice. Aquaculture is the rearing of fish in controlled conditions whereas
hydroponics involves soilless growth of plants. Thus, the combination of both the techniques
into a single system enables the organisms to benefit mutually wherein the plants absorb the
required nutrients and the fishes are provided with purified water. The aquaculture effluent
consists of ammonia which is toxic to fish. The water from the aquaculture tank is pumped
to a grow-bed (which serves as the substrate for plant growth) and recirculated back into the
aquaculture system with the help of a siphon. During the circulation, the water is subjected to
a two-step nitrification process (Somerville et al. 2014) in which Nitrosomonas bacteria convert
ammonia into nitrite which is then converted into nitrate, an absorbable nutrient for plants, by
Nitrobacter bacteria (Klinger & Naylor 2012). The nitrate is absorbed by the plants (Buzby &
Lin. 2014) and the filtered water is recirculated back into the aquaculture tank (Graber & Junge
2009, van Rijn 2013) with the help of a flood and drain mechanism operated by a siphon.
For the aquaponics system to be properly balanced, several parameters should be maintained
at certain optimum values. The important parameters include temperature, humidity, light
intensity, water level in the aquaculture tank, pH, nitrate and ammonia content, dissolved
oxygen level etc. Regular manual monitoring and control of such parameters is a difficult task for farmers (Goddek et al. 2015); this is where the need for an automated aquaponics system (Saaid et al. 2013) arises.
This paper describes the design and implementation of an automated aquaponics system
in the framework of the Internet of Things. The implemented system enables remote monitoring of the important parameters.
Figure 3. Overall system architecture of automated aquaponics system: IoT-based monitoring system
and Arduino-based control.
3 SYSTEM DESIGN
The hardware and software design of the implemented system is briefly described in this
section.
3.1.2 Sensors
Six parameters of the aquaponics system are monitored, namely ambient light intensity,
ambient temperature, relative humidity, moisture content in the grow-bed, level and tempera-
ture of water in the aquaculture tank. Five different sensors are used for this purpose. An LDR (Light Dependent Resistor) is used to measure the ambient light intensity. DHT11 is the sensor used for measuring the ambient temperature (in degrees Celsius) and relative humidity (in percent). The grow-bed moisture sensor consists of two probes which act as a variable resistor depending upon the moisture content of the grow-bed media. The ultrasonic level sensor HC-SR04 is used for measuring the level of water in the aquaculture tank, and the temperature of water in the aquaculture tank is measured using the DS18B20 waterproof temperature sensor.
3.2.2 ThingSpeak
ThingSpeak is an open-source IoT platform used for real-time monitoring purposes. ThingSpeak
channels can be created by logging into ThingSpeak using MathWorks account. The data
from the sensors get stored in ThingSpeak channels in various fields (Pasha 2016, Rao & Ome
2016). The data is displayed in the form of charts. Certain details are required to be entered
in the Arduino IDE sketch in order to send sensor data from the Arduino to ThingSpeak using the ESP8266-01 Wi-Fi module. These include the write API key of the ThingSpeak channel, the ThingSpeak IP, the SSID and password of the Wi-Fi network to be accessed, and the HTTP GET request.
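The HTTP GET update that the Arduino/ESP8266-01 issues can be illustrated in Python for clarity, using the public ThingSpeak update endpoint; the write API key below is a placeholder, and the field numbering follows the channel layout described above.

```python
# Sketch of a ThingSpeak channel update via the REST API.
import requests

WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # placeholder ThingSpeak write key

def push_reading(temperature_c, humidity_pct):
    resp = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": WRITE_API_KEY,
                "field1": temperature_c,
                "field2": humidity_pct},
        timeout=10)
    return resp.text  # ThingSpeak returns the entry ID, or "0" on failure
```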
Real-time monitoring of the parameters of the automated system is possible from any remote location by logging into the ThingSpeak channel created using a MathWorks account. The link to the ThingSpeak login page is https://thingspeak.com/login. The ThingSpeak channel consists of fields corresponding to each parameter, and the measured readings are displayed in the form of charts.
Figure 4 shows the real-time graphical display of the measured parameters as obtained from the ThingSpeak channels. The y-axis of each chart is labelled with the parameter concerned and the x-axis is labelled with the date and time at which the reading was taken.
The status of actuation of the Arduino-based level controller can also be monitored using the ThingSpeak channel. Two fields are created, one corresponding to each relay of the two-channel relay module. When a relay turns ON, the chart corresponding to that relay displays 1; when the relay turns OFF, the status displayed in the chart changes to zero.
Figure 4. Fields of ThingSpeak channel showing the real-time values of measured parameters against
corresponding time.
Figure 5. Plot of measured parameters against time from data stored in the microSD card using the SD-card datalogger.
The proposed automated system can be extended by including other sensors, such as dissolved oxygen, pH and ammonia sensors. The same methodology can be adapted to other Controlled Environment Agriculture practices such as greenhouses, hydroponics and aquaculture. Automation can also be carried out using Wireless Sensor Networks employing the Raspberry Pi and the ZigBee protocol (Ferdoushi & Li 2014), wherein image processing techniques can be applied to analyse the effects of various environmental factors on plant growth (Liao et al. 2017). Image processing techniques can also be applied for weed detection. Intelligent controllers utilizing Artificial Neural Networks and Expert Systems can also improve the performance of Controlled Environment Agriculture systems to a greater extent (Hashimoto et al. 2001, Lee 2000).
The scope of technology-related agricultural practices is increasing due to their sustainable nature and efficient utilization of resources such as land and water. The application of such technologies enables farmers to adopt better farming strategies and thereby increase food productivity. These methods are thereby capable of ensuring sustainability as well as food security.
REFERENCES
Buzby, K. & L.-S. Lin. (2014). Scaling aquaponic systems: Balancing plant uptake with fish output.
Aquacultural Engineering 63, 39–44.
De Baerdemaeker, J., A. Munack, H. Ramon, & H. Speckmann (2001). Mechatronic systems, commu-
nication, and control in precision agriculture. IEEE Control Systems Magazine, 48–70.
Diver, S. (2000). Aquaponics integration of hydroponics with aquaculture. Technical report, ATTRA, NCAT.
Ferdoushi, X. & X. Li (2014). Wireless sensor network system design using raspberry pi and arduino for
environmental monitoring applications. Procedia Computer Science 34, 103–110.
Goddek, S., B. Delaide, U. Mankasingh, K. Ragnarsdottir, H. Jijakli, & R. Thorarinsdottir (2015).
Challenges of sustainable and commercial aquaponics. Sustainability 7, 4199–4224.
Graber, A. & R. Junge (2009). Aquaponic systems: Nutrient recycling from fish wastewater by vegetable
production. Desalination 246, 147–156.
Hashimoto, Y., H. Murase, T. Morimoto, & T. Torii (2001). Intelligent systems for agriculture in japan.
IEEE Control Systems Magazine, 71–85.
Klinger, D. & R. Naylor (2012). Searching for solutions in aquaculture: Charting a sustainable course.
Annual Review of Environment and Resources 37, 247–276.
Lee, P.G. (2000). Process control and artificial intelligence software for aquaculture. Aquacultural Engi-
neering 23, 13–36.
Lennard, W. & B. Leonard (2006). A comparison of three different hydroponic sub-systems (gravel bed,
floating and nutrient film technique) in an aquaponic test system. Aquacult. Int. 14, 539–550.
Liao, M., S. Chen, C. Chou, H. Chen, S. Yeh, Y. Chang, & J. Jiang (2017). On precisely relating the
growth of phalaenopsis leaves to greenhouse environmental factors by using an iot-based monitoring
system. Computers and Electronics in Agriculture 136, 125–139.
Mondal, P. & M. Basu (2009). Adoption of precision agriculture technologies in India and in some
developing countries: Scope, present status and strategies. Progress in Natural Science 19, 659–666.
Ojha, T., S. Misra, & N.S. Raghuwanshi (2015). Wireless sensor networks for agriculture: The state-of-
the-art in practice and future challenges. Computers and Electronics in Agriculture 118, 66–84.
Pasha, S. (2016). Thingspeak based sensing and monitoring system for iot with matlab analysis. IJNTR
2, 19–23.
Rao, S. & N. Ome (2016). Internet of things based weather monitoring system. IJARCCE 5, 312–319.
Saaid, M., N. Fadhil, M. Ali, & M. Noor (2013). Automated indoor aquaponic cultivation technique.
Proc. of 2013 IEEE 3rd International Conference on System Engineering and Technology., 285–289.
Somerville, C., M. Cohen, E. Pantanella, A. Stankus, & A. Lovatelli (2014). Small-scale aquaponics food
production. Technical report, Food and Agricultural Organization of the United Nations, Rome.
van Rijn, J. (2013). Waste treatment in recirculating aquaculture systems. Aquacultural Engineering 53, 49–56.
K. Athira, K. Brijmohan, Varun P. Gopi, K.K. Riyas, Garnet Wilson & T. Swetha
Department of Electronics and Communication Engineering, Government Engineering College, Wayanad, India
1 INTRODUCTION
Optical coherence tomography (OCT) is a non-invasive imaging technique that provides high
resolution images of tissue structures and cross-sectional imaging of many biological systems
(Drexler & Fujimoto, 2008). The OCT imaging technique is widely used by ophthalmologists
in the diagnosis of eye diseases such as glaucoma, macular edema and diabetic retinopathy. Nowadays this imaging technique is also used in the detection of skin disorders.
The working of the OCT imaging technique is based on the Michelson Interferometer,
using Low Coherence Interferometry (Schmitt et al., 1999). Typically, near-infrared laser
light is used as a light source to penetrate into the scattering medium, before capturing
the backscattered optical waves. The image is corrupted during the acquisition process, due to heat produced by the image sensors or to the physical properties of light photons. The light reflected from the micro-structural tissue contains the features of the image. The combination of the various crests and troughs of the light waves backscattered from the tissue produces granular structures in an image. This grainy representation is known as speckle noise. It can alter the important details in an image used to diagnose disease and therefore leads to image quality degradation. This degradation makes it difficult for humans to differentiate pathological tissue from normal tissue. Speckle noise is a multiplicative noise which contains information about the image; it is therefore difficult to remove speckle noise without changing important features in the image. The primary aim of OCT denoising research is to suppress the speckle noise while clearly preserving the edges.
Several filtering methods have been proposed for reducing speckle noise. Their limitation is that filtering techniques remove some of the information in an image along with the speckle noise. This paper presents a comparative study of the performance of different filters.
2 NOISE MODEL
Speckle noise can be modeled as multiplicative noise. It is known to have a Gamma distribution. It is a granular type of noise which appears in the lighter regions of the image as bright specks. Speckle noise can be modeled as:

Y(x,y) = S(x,y) · N(x,y)

where Y, S and N represent the noisy image, signal and speckle noise, respectively. A logarithmic transformation is applied to the image data to convert the multiplicative model into an additive one. The model can then be rewritten as:

f(x,y) = s(x,y) + e(x,y)
where f, s and e represent the logarithm of the noisy image, signal and noise respectively.
3 RELATED WORKS
3.1 Nonlocal means denoising filter with double Gaussian anisotropic kernels
The Non-Local Means (NLM) filter is one of the important denoising filters (Aum et al., 2015). It is a denoising algorithm which exploits the presence of similar features in an image and averages those features to remove speckle noise in an OCT image. The conventional method yields a low signal-to-noise ratio due to its poor noise reduction around the edges of an image. To overcome this limitation, the conventional NLM filter is extended to an NLM filter with double Gaussian anisotropic kernels. The conventional NLM filter uses a single Gaussian kernel to measure the similarities in an image, and since the same kernel is used on every pixel, edges corrupted by speckle noise cannot be denoised correctly. The modified algorithm therefore introduces new kernels whose shapes are adaptively varied; different kernels are used for calculating the similarity between the local neighborhoods of the pixel positions. The modified NLM method produced a PSNR of 31.01 dB. Figures 1(a) and 1(b) show the denoised images using the conventional NLM filter and the modified NLM filter, respectively.
Figure 1. OCT images obtained from a human index fingertip: (a) image processed with the conventional NLM; (b) image processed with the modified NLM.
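For reference, conventional (fixed-kernel) NLM denoising of a log-transformed OCT image could be run with scikit-image as below; this is the baseline method, not the double-Gaussian anisotropic variant described above, and the parameter values are illustrative assumptions.

```python
# Sketch of conventional NLM despeckling on an OCT image.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_denoise(oct_image):
    img = np.log1p(oct_image.astype(float))   # tame the multiplicative speckle
    sigma = float(np.mean(estimate_sigma(img)))
    den = denoise_nl_means(img, h=1.15 * sigma,
                           patch_size=5, patch_distance=6)
    return np.expm1(den)                       # back to the linear scale
```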
where T is the thresholding value. Thresholding is applied to the wavelet coefficients of the OCT image, followed by an inverse wavelet transform.
Î(m) = (1/Z) Σ_{n ∈ N(m)} w_D(m, n) w_R(m, n) I(n)

The weight function w_D is related to the domain filter, which gives larger weight to pixels that are spatially close to the center pixel; similarly, the other weight function w_R is related to the range filter. The two weights are

w_D(m, n) = exp(−||m − n||² / 2σ_d²)

w_R(m, n) = exp(−||I_m − I_n||² / 2σ_r²)
where I_m and I_n are the intensities at m and n, respectively. σ_d is the geometric spread in the domain, which controls the amount of blurring in the image; its value is independent of the noise. Similarly, σ_r is the photometric spread in the image range, and its optimal value is linearly proportional to the noise standard deviation σ. In the basic formulation the photometric spread of the range filter is a constant, and this is one of the main disadvantages of the bilateral filter, as fixing its optimal value is difficult. In order to enhance the sharpness of an image, the bilateral filter needs two modifications: an offset is introduced to the range filter, and the width of the range filter is varied adaptively.
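A direct, unoptimised implementation of the bilateral filter defined by the equations above could look like this; the radius and the two spread parameters are illustrative assumptions.

```python
# Sketch of the bilateral filter: domain weight w_D over spatial
# distance, range weight w_R over intensity difference, normalised by Z.
import numpy as np

def bilateral(img, radius=3, sigma_d=2.0, sigma_r=0.1):
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_d = np.exp(-(ys**2 + xs**2) / (2 * sigma_d**2))   # domain kernel
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            w_r = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = w_d * w_r
            out[i, j] = np.sum(w * patch) / np.sum(w)   # normalisation Z
    return out
```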
4 CONCLUSION
The speckle noise reduction algorithms are described in this paper. Among the filters used for despeckling optical coherence tomography images, the bilateral filtering technique is efficient: it eliminates a significant amount of noise while preserving the edges in the denoised image. Adaptive filter methods are more efficient than fixed filtering as they preserve the fine details of edges.
REFERENCES
[1] W. Drexler, J.G. Fujimoto, “Optical Coherence Tomography: Technology and Applications”,
Springer International Publishing, Switzerland, 2015.
[2] J.M. Schmitt, S.H. Xiang, and K.M. Yung, “Speckle in Optical coherence Tomography: An Over-
view” J. Biomed. Opt. 4 (1) (1999) 95–105.
[3] Jaehong Aum, Ji-Hyun Kim, and Jichai Jeong, “Effective speckle noise suppression in optical coherence tomography images using nonlocal means denoising filter with double Gaussian anisotropic kernels”, published 3 April 2015.
[4] Desmond C. Adler, Tony H. Ko, and James G. Fujimoto “Speckle reduction in optical coherence
tomography images by use of a spatially adaptive wavelet filter”, December 15, 2004/Vol. 29,
No. 24/OPTICS LETTERS.
[5] Markus A. Mayer, Anja Borsdorf, Martin Wagner, Joachim Hornegger, Christian Y. Mardin, and
Ralf P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data”, 1 March
2012/Vol. 3, No. 3/BIOMEDICAL OPTICS EXPRESS.
[6] Rajesh Mohan R., S. Mridula, P. Mohanan, “Speckle Noise Reduction in Images using Wiener Filtering and Adaptive Wavelet Thresholding”, 2016 IEEE.
[7] Ch. Ravi Kumar, Member, IACSIT and S.K. Srivatsa, “Enhancement of Image Sharpness with
Bilateral and Adaptive Filter”, International Journal of Information and Education Technology,
Vol. 6, No. 1, January 2016.
[8] A. Ozcan, A. Bilenca, A.E. Desjardins, B.E. Bouma, G.J. Tearney, “Speckle reduction in optical
coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24 (7) (2007) 1901–1910.
[9] V. Frost, J. Stiles, K. Shanmugan, J. Holtzman, “A model for radar images and its application to
adaptive digital filtering of multiplicative noise,” IEEE Trans. Pattern Anal. Mach. Intell. (PAMI-4)
(2) (1982) 157–166.
[10] T. Loupas, W. McDicken, P. Allan, “An adaptive weighted median filter For speckle suppression in
medical ultrasonic images,” IEEE Trans. Circuits Syst. 36 (1) (1989) 129–135.
ABSTRACT: This paper compares a generic object detection method using three different feature extraction schemes. The query image can be of different types, such as a real image or a hand-drawn sketch. The method operates using a single example of the target object. The feature descriptors emphasize the edge parts and their distribution structures, so the method is very robust and can deal with virtual images or hand-drawn sketches. The approach is extended to account for large variations in rotation. Good performance is demonstrated on several data sets, indicating that the object was successfully detected under different imaging conditions.
1 INTRODUCTION
In image processing, it is very important to analyze the visual objects in an image. Object detection means locating an object of a certain class in a test image. The method studied here uses only one query image as the template to detect the object, without any training procedure. Such systems are applicable in different areas such as surveillance, video forensics, medical image analysis and so on.
Training-free object detection with one query image has many applications, such as automatic passport control at airports, where the single photo in the passport is the only example available. Another application is image retrieval from the Web, where only a single sample of the target is provided by the user and every database image is compared with this single sample. Yet another application is the classification of an unknown set of images into one of the training classes.
2 LITERATURE REVIEW
There are different methods for object detection, most of them based on a training process. However, training-based methods are subject to sample restrictions. In cases such as frontal face detection, samples can be accurately aligned; in many other cases, collecting and aligning samples is not possible, which badly affects the performance of training-based detection methods. Training is also unsuitable for immediate tasks, because the collection of samples and the training of the model must be completed in advance, and when the target class changes they must be redone.
3 OVERVIEW
Object detection means locating any object of a particular class in a test image. This method uses only one query image as the template to detect the object, without any training procedure. As shown in Figure 1, the query image should be a typical sample of the target class, containing only one object and as little background as possible. It can be a real image, a virtual image from a simulation model or even a hand-drawn sketch which exhibits only a rough profile of the object. The detection task is very similar to the template matching process: the query image is used as a standard template and the test images are matched to this template to recognize the objects.
The test image T is divided into overlapping patches Ti, which have the same size as Q. The features of the query image and of the test image patches are then extracted. The query image is compared with each of the test image patches and the most similar patch is selected.
4 THEORY
The main steps of the object detection method are feature extraction, dimensionality reduction, similarity measurement and decision making, as shown in Figure 2. Feature extraction is the most important step in the object detection process. The method uses three different feature extraction schemes: Dense Scale Invariant Feature Transform (DSIFT), GIST and Speeded Up Robust Features (SURF). Principal Component Analysis (PCA) is used to reduce the dimensionality of the features. Euclidean distance and Matrix Cosine Similarity (MCS) are used for similarity measurement. The decision process is based on the minimum Euclidean distance or the maximum MCS value.
image is then split into a grid on several scales, and the response of each cell is computed
using a series of Gabor filters. All of the cell responses are concatenated to form the fea-
ture vector.
3. SURF: SURF descriptors can be used to locate and recognize objects, people or faces, to reconstruct 3D scenes, to track objects and to extract points of interest. SURF is a fast feature extraction method based on the integral image and the Hessian matrix, and it is partially inspired by SIFT. To detect interest points, SURF uses an integer approximation of the determinant-of-Hessian blob detector. Its feature descriptor is based on the sum of the Haar wavelet responses around the interest points.
SURF uses multi-resolution pyramid technology, copying the original image with a pyramid-shaped Gaussian or Laplacian pyramid to obtain an image of the same size but with reduced bandwidth. This achieves a special blurring effect on the original image, called scale-space, and ensures that the interest points are scale invariant.
The Matrix Cosine Similarity between the query feature matrix F_Q and the feature matrix F_Ti of a test patch is

ρ = ρ(F_Q, F_Ti) = ⟨F_Q, F_Ti⟩ / (||F_Q|| ||F_Ti||)     (1)
The patch with minimum Euclidean distance will be detected as the result.
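A minimal sketch of the MCS decision in Eq. (1) follows: the feature matrices are compared via their Frobenius inner product, so the score is the cosine of the angle between the two flattened feature vectors. Function and variable names are illustrative.

```python
# Matrix Cosine Similarity between a query feature matrix and the
# feature matrix of one test patch.
import numpy as np

def matrix_cosine_similarity(f_q, f_t):
    num = np.sum(f_q * f_t)                         # Frobenius inner product
    den = np.linalg.norm(f_q) * np.linalg.norm(f_t)
    return num / den

# The patch maximising this score (or minimising the Euclidean
# distance) is reported as the detection.
```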
In order to demonstrate the identification performance, the system was tested for car detec-
tion, face detection, generic object detection and rotated object detection. For rotated object
detection, generic objects were implanted in the background images.
Table: detection results of each feature extraction method for car, face and generic object detection.
The overall performance analysis of the system is shown in Table 2. In this analysis a hand-drawn sketch is used as the query image and Cosine Similarity is used for the distance measurement.
The method was tested using different objects. The query image can be a real image or a hand-drawn image of the object with as little background as possible. The results show that the approach is quite stable and is not affected by the choice of query image; even the performance with a hand-drawn sketch is quite good. Of the three feature extraction methods, GIST gives the best detection rate, while for rotated objects SURF performs better than the others.
6 CONCLUSION
Object detection refers to finding the position of a particular object in a given image. There are many object detection methods, mostly based on a training process. The target object for
REFERENCES
[1] Bin Xiong and Xiaoqing Ding, “A Generic Object Detection Using a Single Query Image Without
Training”, Tsinghua Science and Technology, April 2012, 17(2): 194–201.
[2] Mikolajczyk K. and Schmid C., “A performance evaluation of local descriptors”, IEEE Trans.
Pattern Analysis and Machine Intelligence, 2005, 27(10): 1615–1630.
[3] Jurie F. and Triggs B., “Creating efficient codebooks for visual recognition”, In: Proceedings of
IEEE International Conference on Computer Vision. Beijing, China, 2005.
[4] Seo H, Milanfar P., “Training-free, generic object detection using locally adaptive regression kernels”,
IEEE Trans. Pattern Analysis and Machine Intelligence, 2010, 32(9).
[5] Shechtman E. and Irani M., “Matching local self-similarities across images and videos”, In:
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[6] Ferrari V., Tuytelaars T. and Van Gool L., “Object detection by contour segment networks”, In:
Proceedings of European Conference on Computer Vision. Graz, Austria, 2006.
[7] Aude Oliva and Antonio Torralba, “Building the gist of a scene: the role of global image features in
recognition”, Progress in Brain Research, 2006, Vol. 155, ISSN 0079-6123.
[8] Ivan Sikiric, Karla Brkic and Sinisa Segvic, “Classifying traffic scenes using the GIST image
descriptor”, Proceedings of the Croatian Computer Vision Workshop, 2013.
[9] Bay H., Tuytelaars T. and Van Gool L., “Surf: Speeded up robust features”, Computer Vision, Springer Berlin Heidelberg, pp. 404–417, 2006.
[10] Baofeng Zhang, Yingkui Jiao, Zhijun Ma, Yongchen Li and Junchao Zhu, An Efficient Image
Matching Method Using Speed Up Robust Features, Proceedings of 2014 IEEE International
Conference on Mechatronics and Automation August 3–6, Tianjin, China.
[11] K. Murphy, A. Torralba, D. Eaton and W. Freeman, Object detection and localization using local and global features, Towards Category-Level Object Recognition, 2005, Vol. 4170: 382–400.
[13] Agarwal S, Awan A, Roth D, Learning to detect objects in images via a sparse, part-based
representation, IEEE Trans. Pattern Analysis and Machine Intelligence, 2004, 26(2): 1475–1490.
[14] Rowley H, Baluja S, Kanade T., Neural network-based face detection, IEEE Trans. Pattern Analysis
and Machine Intelligence, 1998, 20(1): 22–38.
ABSTRACT: A major issue in the post-harvest phase of the fruit production sector is the
systematic determination of the maturity level of fruits, such as the ripeness of watermel-
ons. Maturity assessment plays an important role while sorting in packing houses during
the export. This paper proposes a support vector machine-based method for the automated
non-destructive classification of watermelon ripeness by acoustic analysis. Acoustic samples
are collected from ripe and unripe watermelons in a studio environment by thumping on the
surface of watermelons. Sound samples are pre-processed to remove silence regions by fixing
an energy threshold. Pre-processed sound signals are segmented into equal-length frames
sized 200 ms, and Teager Energy Operator (TEO)—based features are extracted. The entire
set of audio samples is divided into a training set with 60% of the total audio samples, and the remaining 40% is used for testing. A support vector machine-based classifier is trained with features extracted from the training set. Twenty-dimensional feature vectors are computed in
the feature extraction phase and fed into the classification phase. The results show that the
proposed TEO-based method was able to discriminate between ripe and unripe watermelons
with overall accuracy of 83.35%.
1 INTRODUCTION
During the past decade, the inspection of the quality and maturity of fruits and vegetables in harvest and postharvest conditions has been in high demand in the fruit production industry. Systematic maturity assessment plays an important role in sorting, and automated non-contact techniques for sorting and grading are strongly demanded by the industry. Several techniques to measure firmness and quality are listed in Abbott et al. (1968) and Chen et al. (1993). Usually, farmers identify the maturity and quality levels of fruits using certain indices, such as the number of days after full bloom, flesh color, thumping
using certain indices, such as the number of days after full bloom, flesh color, thumping
sound, fruit shape, fruit size and skin color. Traditional methods may have their own limita-
tions. For example, judging watermelon ripeness, using its apparent properties such as size
or skin color, is very difficult due to its thick skin. The most common way by which people
used to determine the watermelon ripeness was by tapping the skin of the melon and then
judging the ripeness using the reflected sound. If the sound is dense, then the watermelon is
under-ripe, while if the sound is hollow, then the watermelon is ripe. Examples of a ripe and
unripe watermelon are shown in the Figure 1. The use of automated inspection of fruits and
vegetables has increased in recent decades to achieve higher quality sorting before packaging.
Acoustic features are widely used in many applications in day-to-day life (Ayadi et al. 1995; Piyush et al. 2016). In the study of Miller and Delwiche (1989), spectral information and machine vision were used for bruise detection on peaches and apricots. Hyper-spectral imaging for detecting apple bruises was investigated by Xing and De Baerdemaeker (2005). In Abbaszadeh et al. (2011), a non-destructive method for quality testing using Laser Doppler Vibrometry (LDV) technology is presented. By means of a Fast Fourier Transform (FFT) algorithm and by considering the
ratio of the response signal to the excitation signal, the vibration spectra of fruit are analyzed to classify ripeness. In this paper, we propose a method based on the Teager Energy Operator (TEO) and support vector machines. In the proposed work the Teager energy-based features give results on par with other non-destructive techniques (Baki et al. 2010; Diezma-Iglesias et al. 2004). The rest of the paper is organized as follows. Section 2 discusses the theory of the Teager energy operator and support vector machines in detail. Section 3 gives the experimental description of the proposed method along with a description of the dataset. Section 4 presents results with analysis, followed by the conclusion in Section 5.
2 THEORETICAL BACKGROUND
The Teager Energy Operator is defined, in the continuous case, as ψ[x(t)] = ẋ²(t) − x(t)ẍ(t), where ẋ denotes the first derivative of x and ẍ the second derivative. In the proposed work, a non-speech application of the TEO feature is demonstrated. The steps to compute a TEO-based feature are shown in Figure 2. Acoustic samples are collected by thumping on the surface of the watermelon, as shown in Figure 3. Each Hamming-windowed audio frame is transformed to the frequency domain using an FFT algorithm and the power spectrum S(i) is computed, followed by a TEO transform, resulting in:

ψ[S(i)] = S²(i) − S(i + 1)S(i − 1)     (2)
A Mel-scale filter bank is used to filter the spectrum obtained from the TEO processing. Each filter in the bank is a triangular bandpass filter, H_m, which tries to imitate the frequency resolution of the human auditory system (Hui et al. 2008).
The outputs of the filter bank are obtained as

P_m = Σ_i ψ[S(i)] · H_m(i),   m = 1, 2, …, M     (3)
Figure 4. (a) Thumping sound collected from watermelon; (b) TEO features extracted for one
acoustic sample.
where M is the number of filters in the bank. Log compression is then applied to the filter bank outputs and, finally, a Discrete Cosine Transform (DCT) is applied in order to compress the spectral information into the low-order coefficients. The resulting feature vector is

TEOCEP(k) = Σ_{i=1}^{M} cos(kπ(i − 0.5)/M) · log(P_i)     (4)
The thumping sound collected from a ripe watermelon and its TEO features are shown in Figures 4(a) and (b), respectively.
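A compact sketch of the TEOCEP pipeline in Eqs. (2)-(4) follows, assuming SciPy; a precomputed Mel filter bank matrix mel_fb (M filters by spectrum bins) is passed in, since its construction is not detailed in the paper, and the frame length is an assumption.

```python
# TEOCEP feature sketch: power spectrum, discrete TEO, Mel filter bank,
# log compression and DCT, matching Eqs. (2)-(4).
import numpy as np
from scipy.fftpack import dct

def teocep(frame, mel_fb, n_ceps=20):
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    teo = spec[1:-1] ** 2 - spec[2:] * spec[:-2]   # Eq. (2), interior bins
    p = mel_fb[:, 1:-1] @ teo                      # Eq. (3): filter bank outputs
    return dct(np.log(np.maximum(p, 1e-12)), norm="ortho")[:n_ceps]  # Eq. (4)
```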
3 EXPERIMENTAL DESCRIPTION
The confusion matrix of the proposed classification system is given in Table 2. The proposed method yields an overall classification accuracy of 83.35% with a 60:40 (training:testing) split. The experimental results show that the proposed TEO-based system outperforms the other acoustic-based methods cited in the literature. While the system identified all the ripe watermelons correctly, the accuracy for the unripe cases is 66.7%. The experimental results show that TEOCEP features are very effective in capturing human perception sensitivity and energy distribution, which is important for discriminating ripe and unripe watermelons.
Table 2. Confusion matrix of the proposed system (left pair of columns: training set; right pair: test set).

          Pred. ripe   Pred. unripe   Pred. ripe   Pred. unripe
RIPE          10            0              8             0
UNRIPE         0           13              2             4
5 CONCLUSION
REFERENCES
ABSTRACT: In this paper, the fusion of a modified group delay feature and a frame slope
feature is effectively utilized to identify the most predominant instrument in polyphonic
music. The experiment is performed on a subset of the Instrument Recognition in Musical
Audio Signals (IRMAS) dataset. The dataset consists of polyphonic music, with predominant
instruments such as acoustic guitar, electric guitar, organ, piano and violin. In the classifica-
tion phase, a Gaussian Mixture Model (GMM)—based classifier makes the decision based
on the log-likelihood score. The results show that a phase-based feature modified group
delay works more effectively than magnitude spectrum based features, such as Mel-frequency
cepstral coefficients and frame slope. The classification accuracy of 57.20% is reported from
the fusion experiment. The proposed system demonstrates the potential of the fusion of
features in recognizing the predominant instrument in polyphonic music.
1 INTRODUCTION
Music Information Retrieval (MIR) is a growing field of research with many real-world applications, and is applied in categorizing, manipulating and synthesizing music. MIR mainly focuses on the understanding and usefulness of music data through the research, development and application of computational approaches and tools. Automatic instrument recognition, one of the MIR tasks, has a wide range of applications ranging from source separation to melody extraction. In Computational Auditory Scene Analysis (CASA), musical instrument recognition and sound source recognition play a vital role. Predominant instrument recognition refers to the problem where the prominent instrument is identified from a mixture of instruments playing together. In the literature, instrument recognition by source separation has been widely studied in many music information retrieval applications. Since there are numerous approaches for source separation, such as polyphonic pitch estimation (Klapuri 2001) or the separation of concurrent harmonic sounds, it can act as a front-end for a subsequent monophonic recognition task.
2 RELATED WORK
In the literature, numerous attempts at instrument recognition have been reported for monophonic or polyphonic audio files. Features derived from a Root-Mean-Square (RMS) energy envelope via Principal Component Analysis (PCA) can be seen in Kaminsky and Materka (1995). In another approach (Eronen and Klapuri 2000), cepstral coefficients combined with temporal features are used to classify 30 orchestral instruments with several articulation styles. The group delay-based feature has also been used for automatic instrument recognition in
The block diagram of the proposed system is shown in Figure 1. The experiment is conducted in four phases. In the first phase, a baseline system with MFCC features is used, followed by phases with the frame slope feature, the modified group delay feature, and an early fusion of these features. In all phases, GMM-based classifiers are used. Sixty-four-mixture GMMs are trained for all instrument models, using audio files in an isolated environment. For each instrument model, the likelihood score is computed for the test audio file, and the model which reports the maximum log-likelihood is declared as the decision. Mathematically, this amounts to finding the target λ_i for which the following criterion is satisfied:

arg max_{1 ≤ i ≤ R} Σ_{m=0}^{M−1} log p(O_m | λ_i)     (1)

where O_m, λ_i, M and R represent the feature vectors, the GMM model for an instrument, the number of feature vectors, and the total number of instrument models, respectively.
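The decision rule in Eq. (1) could be sketched with scikit-learn as below; scikit-learn is an assumed implementation choice (the paper does not name its toolkit), and data handling is omitted.

```python
# Sketch of GMM training and the maximum log-likelihood decision of Eq. (1),
# with 64 mixtures per instrument model as stated above.
from sklearn.mixture import GaussianMixture

def train_models(features_per_instrument):
    # features_per_instrument: {name: (n_frames, n_dims) feature array}
    return {name: GaussianMixture(n_components=64).fit(feats)
            for name, feats in features_per_instrument.items()}

def predict(models, test_features):
    # score_samples gives per-frame log-likelihoods; Eq. (1) sums them.
    scores = {name: m.score_samples(test_features).sum()
              for name, m in models.items()}
    return max(scores, key=scores.get)
```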
4 FEATURE EXTRACTION
Figure 1. The proposed system. λ1 λ2 … λn represent Gaussian mixture models for isolated instruments.
τ(e^{jω}) = [X_R(e^{jω}) Y_R(e^{jω}) + Y_I(e^{jω}) X_I(e^{jω})] / |X(e^{jω})|²     (2)

where the subscripts R and I denote the real and imaginary parts, respectively, and X(e^{jω}) and Y(e^{jω}) are the Fourier transforms of x[n] and nx[n]. The denominator is replaced by its spectral envelope to mask its spiky nature. The modified group delay function (MODGD) τ_m(e^{jω}) is obtained as:

τ_m(e^{jω}) = [X_R(e^{jω}) Y_R(e^{jω}) + Y_I(e^{jω}) X_I(e^{jω})] / |S(e^{jω})|^{2γ}     (3)
where S(e^{jω}) is the cepstrally smoothed version of X(e^{jω}). The group delay function and the modified group delay function for a speech frame are shown in Figures 2(a) and (b), respectively. Modified group delay functions are converted to cepstral features using the DCT, as shown in Hegde (2005). In the proposed experiment, 13-dimensional modified group delay features (MODGDF) are computed from the test and target audio files.
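The per-frame MODGD computation in Eqs. (2)-(3) could be sketched as follows; the cepstral-liftering approach to smoothing and the value of γ are assumed settings, since the paper does not give them.

```python
# Modified group delay sketch: numerator from the DFTs of x[n] and
# n*x[n]; the spiky |X|^2 denominator is replaced by a cepstrally
# smoothed envelope S, per Eqs. (2)-(3).
import numpy as np

def modgd(x, gamma=0.9, lifter=30):
    n = np.arange(len(x))
    X = np.fft.rfft(x)
    Y = np.fft.rfft(n * x)
    num = X.real * Y.real + X.imag * Y.imag         # numerator of Eqs. (2)-(3)
    ceps = np.fft.irfft(np.log(np.abs(X) + 1e-12))  # real cepstrum of |X|
    ceps[lifter:-lifter] = 0.0                      # keep low quefrencies only
    S = np.exp(np.fft.rfft(ceps).real)              # smoothed envelope S(e^jw)
    return num / (S ** (2 * gamma) + 1e-12)         # Eq. (3)
```

A DCT of this function would then yield the 13-dimensional MODGDF vector mentioned above.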
Figure 2. (a) Group delay functions computed for a speech frame; (b) Modified group delay functions
computed for the frame in (a).
5 PERFORMANCE EVALUATION
5.1 Dataset
The Instrument Recognition in Musical Audio Signals (IRMAS) dataset consists of more than 6,000 musical audio excerpts from various styles with annotations of the predominant instrument present. They are excerpts of three seconds for 11 pitched instruments. In the proposed experiment, Gaussian mixture models are first built using monophonic instrument wave files, with 1,000 files per instrument. We considered five classes, namely acoustic guitar, electric guitar, organ, piano and violin. In the testing phase, 250 files of the five classes (50 each) are tested against these models. All audio files are stored in 16-bit stereo WAV format, sampled at 44.1 kHz.
The studies conducted by Diment et al. (2013) on monophonic instrument recognition motivated us to focus on system characteristics in the proposed polyphonic experiment. From the literature, it can be seen that conventional MFCC features are widely used for timbre analysis. But, as mentioned earlier, frame slope features have already been proven to be more effective in emphasizing system characteristics. The mapping of the MFCC and frame slope feature sets into a two-dimensional feature space using PCA is shown in Figure 3. It is worth noting that the classes are better separated in the frame slope feature space than in the MFCC space.
Figure 3. 2-dimensional mapping of (a) MFCC features; (b) Frame slope features.
Table 1. Confusion matrix for the MODGDF experiment (rows: true class; columns: predicted class).

            A. Guitar  E. Guitar  Organ  Piano  Violin  Accuracy (%)
A. Guitar       35         6        5      3      1        70.0
E. Guitar        8        14       15      1     12        28.0
Organ           24         7        7      9      3        14.0
Piano            8         0        1     41      0        82.0
Violin           0         0        0      6     42        84.0

Table 2. Confusion matrix for the frame slope experiment.

            A. Guitar  E. Guitar  Organ  Piano  Violin  Accuracy (%)
A. Guitar       32         3        1     10      4        64.0
E. Guitar        2        37        1      6      4        74.0
Organ            4        15        8     13     10        16.0
Piano            5         3        0     38      4        76.0
Violin          28         2        0      0     20        40.0

Table 3. Confusion matrix for the fusion experiment (MODGDF + frame slope).

            A. Guitar  E. Guitar  Organ  Piano  Violin  Accuracy (%)
A. Guitar       21        26        1      0      2        42.0
E. Guitar        1        44        1      0      4        88.0
Organ            1        25       15      5      4        30.0
Piano            1        18        3     28      0        56.0
Violin           3        11        0      1     35        70.0

Table 4. Overall accuracy of each feature set.

No.   Feature set               Overall accuracy (%)
1     MFCC                      43.20
2     Frame Slope               54.00
3     MODGDF                    55.60
4     MODGDF + Frame Slope      57.20
The confusion matrices for the MODGD, frame slope and fusion experiments are shown in Tables 1–3, and the overall accuracy is reported in Table 4. From the experiments, we observed that the frame slope and MODGD features provide complementary information, so we finally combined the feature sets and conducted the fusion experiment.
The results show that the baseline system reports an overall accuracy of 43.2%. While the fusion of features improved the individual classification accuracy for electric guitar and organ, it deteriorated the performance for some other classes. It is worth noting that the overall result improved when considering the experiment as a whole. The experimental results demonstrate the potential of the modified group delay feature in recognizing the predominant instrument in polyphonic music and related applications.
7 CONCLUSION
REFERENCES
Diment, A., P. Rajan, T. Heittola, & T. Virtanen (2013). Modified group delay feature for musical
instrument recognition. In Proceedings of 10th Int. Symp. Comput. Music Multidiscip. Res., Marseille,
France, 2013, 431–438.
Eronen, A. & A. Klapuri (2000). Musical instrument recognition using cepstral coefficients and
temporal features. In Acoustics, Speech, and Signal Processing, 2000. Icassp’00. Proceedings. 2000
IEEE International Conference on 2, II753–II756.
Hegde, R.M. (2005). Fourier transform based features for speech recognition. PhD dissertation, Indian Institute of Technology Madras, Department of Computer Science and Engg., Madras, India.
Heittola, T., A. Klapuri, & T. Virtanen (2009). Musical instrument recognition in polyphonic audio
using source-filter model for sound separation. In Proceedings of Int. Soc. Music Inf. Retrieval Conf.,
327–332.
Kaminsky, I. & A. Materka (1995). Automatic source identification of monophonic musical instrument sounds. In Proceedings of the IEEE Int. Conf. on Neural Networks, 185–194.
Kashino, K., T. Nakadai & T. Kinoshita (1998). Application of Bayesian probability network to music scene analysis. In Proceedings of the International Joint Conference on AI, CASA workshop, 115–137.
Klapuri, A. (2001, May). Multipitch estimation and source separation by the spectral smoothness
principle. Acoustics, Speech and Signal Processing (ICASSP), 2001 IEEE International Conference
on, 5, 3381–3384.
Madikeri, S.R. & H.A. Murthy (2011). Mel filter bank energy based slope feature and its application to
speaker recognition. in proceedings of National Communication Conference(NCC), 155–175.
Murthy, H.A. & B. Yegnanarayana (2011). Group delay functions and its application to speech
processing. Sadhana 36(5), 745–782.
Oppenheim, A. & R. Schafer (1990). Discrete time signal processing. New Jersey: Prentice Hall, Inc.
Rajan, R., M. Misra, & H.A. Murthy (2017). Melody extraction from music using group delay functions.
International Journal of Speech Technology 20(1), 185–204.
Rajan, R. & H.A. Murthy (2013a). Group delay based melody monopitch extraction from music. in
proceedings of the IEEE Int.Conf. on Audio, Speech and Signal Processing, 186–190.
Rajan, R. & H.A. Murthy (2013b). Melodic pitch extraction from music signals using modified group
delay functions. In proceedings of the Communications (NCC), 2013 National Conference on, 1–5.
Rajan, R. & H.A. Murthy (2017). Music genre classification by fusion of modified group delay and
melodic features. In proceedings of the Communications (NCC), 2017 National Conference on, 1–5.
Yu, L. & Y. Yang (2014). Sparse cepstral codes and power scale for instrument identification. In
Proceedings of 2014 IEEE Int. Conf. Acoust., Speech Signal Process., 7460–7464.
1 INTRODUCTION
Frequency synthesizers (PLL and VCO combinations) that are capable of operating only at integer multiples of the phase frequency detector (PFD) frequency are known as integer-N PLLs, and those which can synthesize finer steps, or fractions of the PFD frequency, are known as fractional synthesizers. Fractional synthesizers generate two types of spurious signals, namely fractional spurs and integer boundary spurs (IBS). Modern PLLs use higher-order ΣΔ modulators to reduce fractional spurs. Integer boundary spurs are caused by interactions between the RF VCO frequency and the harmonics of the reference or PFD frequency. When these frequencies are not integer related, spurs may appear as sidebands on the VCO output spectrum at offsets equal to the fundamental and harmonics of the difference between an integer multiple of the reference/PFD frequency and the VCO frequency. These spurs are generated inside the PLL, but the system designer can predict them and hence they can be avoided: if the difference frequency can be made larger than the loop bandwidth, the spurs will be filtered off by the loop filter.
The frequency range of operation for the synthesizer was from 700 MHz to 1000 MHz
in 6.25 kHz steps. The output divider translates the VCO frequency range of 4300 MHz to
5300 MHz, to the required output frequency range. For each frequency, the optimum multi-
plier and R-counter settings are programmed to the synthesizer.
3.1 Description
One of the widely used techniques for avoiding IBS is changing the reference frequency. This technique eliminates the IBS caused by interactions of the VCO with the reference as well as with the PFD, but it is more costly and space consuming, since an additional clock synthesizer circuit is required. The present circuit is intended for a handheld radio application, with both space and cost constraints. Note also that IBS generation due to interaction with the reference cannot be avoided using the technique proposed here.
The algorithm works by computing the IBS offset frequency at each output frequency for various PFD frequencies. It starts with the highest PFD frequency and computes the IBS offset; a higher PFD frequency results in a lower N-counter value and hence better phase noise performance. If the calculated offset is greater than 2 MHz, the code exits the loop and programs the PLL multiplier and R-counter settings for that PFD frequency. The PFD frequency is constrained to within 50 MHz to 120 MHz; the lower limit of 50 MHz is fixed as a compromise between spurious performance and phase noise. The charge pump current must be re-programmed for a lower PFD frequency in order to keep the loop dynamics constant: the charge pump current changes inversely with the PFD frequency, so when the PFD frequency increases, the charge pump current must decrease, and vice versa.
3.2 Flowchart
1. Obtain the possible PFD values for multiplier M and R-counter values from 1 to 10.
2. Start with the maximum PFD frequency.
3. Assign the value to the variable fPFD. Calculate the value of fVCO based on the present output divider value.
4. For the desired VCO frequency (fVCO), a formula check is done to evaluate whether this fVCO will generate a boundary spur:
   Df1 = Absolute [{Roundup (fVCO/fPFD)} × fPFD − fVCO]
   Df2 = Absolute [{Rounddown (fVCO/fPFD)} × fPFD − fVCO]
   Df1 and Df2 are the spur offset frequencies from the wanted frequency.
5. Check whether these values are greater than 2 MHz. If this condition is not met, go back and check the spur offsets for the next lower PFD frequency.
6. If the spur offset is greater than 2 MHz, proceed to configure the PLL for that PFD frequency; the multiplier and R-counter settings must be programmed (a code sketch of this procedure is given below).
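The flowchart above could be sketched in code as follows; frequencies are in MHz, and the register programming step is abstracted away.

```python
# Sketch of the IBS-avoidance PFD selection: try PFD candidates from the
# highest downwards and keep the first whose integer-boundary spur
# offsets both exceed 2 MHz.
import math

def pick_pfd(f_out, out_div, pfd_candidates, min_offset=2.0):
    f_vco = f_out * out_div
    for f_pfd in sorted(pfd_candidates, reverse=True):
        df1 = abs(math.ceil(f_vco / f_pfd) * f_pfd - f_vco)
        df2 = abs(math.floor(f_vco / f_pfd) * f_pfd - f_vco)
        if df1 > min_offset and df2 > min_offset:
            return f_pfd, df1, df2   # programme M and R for this PFD
    return None                      # no candidate clears the bound

# Worked example from the next section:
# pick_pfd(768.025, 6, [51.2, 67.2, 76.8, 86.4, 96, 115.2])
# returns approximately (86.4, 57.45, 28.95)
```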
Reference = 19.2 MHz.
There are 6 possible PFD values based on the various combinations of multiplier M and
R-counter values from 1 to 10.
1. The possible values are 51.2, 67.2, 76.8, 86.4, 96, 115.2 MHz
2. Starting with the maximum value of 115.2 MHz
3. fPFD = 115.2 MHz. Then fVCO = fOUT × DIV = 768.025 × 6 = 4608.15 MHz
4. Df1 = Absolute [{Roundup (4608.15/115.2)} × 115.2 - 4608.15] = 115.05 MHz
Df2 = Absolute [{Rounddown (4608.15/115.2)} × 115.2 - 4608.15] = 0.15 MHz
5. Df1 & Df2 > 2 MHz is NOT satisfied. So proceed with the next lower value of PFD frequency.
6. fPFD = 96 MHz. The fVCO = fOUT × DIV = 768.025 × 6 = 4608.15 MHz
7. Df1 = Absolute [{Roundup (4608.15/96)} × 96 - 4608.15] = 95.85 MHz
Df2 = Absolute [{Rounddown (4608.15/96)} × 96 - 4608.15] = 0.15 MHz
8. Df1 & Df2 > 2 MHz is NOT satisfied. So proceed with the next lower value of PFD
frequency.
9. fPFD = 86.4 MHz. The fVCO = fOUT × DIV = 768.025 × 6 = 4608.15 MHz
10. Df1 = Absolute [{Roundup (4608.15/86.4)} × 86.4 - 4608.15] = 57.45 MHz
Df2 = Absolute [{Rounddown (4608.15/86.4) } × 86.4 - 4608.15] = 28.95 MHz
11. Df1 & Df2 > 2 MHz is satisfied. So proceed with programming the PLL with this PFD
frequency.
So in this case, if we program the PLL with a PFD of 115.2 MHz or 96 MHz, there will be IBS at frequency offsets of ±300 kHz from the center frequency, but if the PFD is made 86.4 MHz, the spur offsets move away to more than 28 MHz. This frequency has an additional problem, however, since the VCO frequency is also close to an integer multiple of the 19.2 MHz reference (4608.15 MHz is not itself an integer multiple, but 4608 MHz is the 240th multiple of 19.2 MHz). This issue is discussed later in the evaluation results section.
The functioning of the algorithm over the entire frequency range of operation was validated in spreadsheet software. Figure 3 shows the spurious frequency offsets at different frequencies, with the spur offsets from the fundamental on the right y-axis, the converging frequency number on the left y-axis and the VCO frequency on the x-axis. It is evident that there are several frequency points where the spur offset crosses the 2 MHz bound; at these points the PFD is changed and the spur offsets are moved off to a higher value.
4 EVALUATION RESULTS
Figure 4 shows the spectrum taken at an output frequency of 792.025 MHz. With an output divider of 6, the VCO frequency is 4752.15 MHz. 4752 MHz is an integer multiple of 86.4 MHz (the 55th multiple), and 86.4 MHz is one of the candidate PFD frequencies for a 19.2 MHz reference.
Applying the algorithm with fPFD = 86.4 MHz gives Δf2 = |Rounddown(4752.15/86.4) × 86.4 - 4752.15| = 0.15 MHz, so IBS can be observed as sidebands at 150 kHz offsets. This is seen in the yellow trace.
Re-calculating with 115.2 MHz as the PFD frequency gives Δf1 = 86.25 MHz and Δf2 = 28.95 MHz. The offset frequencies are thus increased to greater than 28 MHz, so the spurs are easily filtered off by the loop filter. The red trace shows the same output frequency with a 115.2 MHz PFD frequency.
As mentioned earlier, the IBS generated by the interaction of the VCO with the reference cannot be avoided using this technique. To demonstrate this phenomenon, an output
REFERENCES
[1] "Wideband Synthesizer with Integrated VCO," ADF4350 Data Sheet, Rev. A, Analog Devices, April 2011, pp. 23-28.
[2] R. Vishnu and S.S. Anulal, "An avoidance technique for mitigating the integer boundary spur problem in a DDS-PLL hybrid frequency synthesizer," 2015 International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, 2015, pp. 0443-0446.
[3] R. Brennan, "Analyzing, Optimizing, and Eliminating Integer Boundary Spurs in Phase-Locked Loops with VCOs at up to 13.6 GHz," Analog Dialogue, Vol. 49, August 2015.
[4] "Ultra-Low Noise PLLatinum Frequency Synthesizer with Integrated VCO," LMX2541 Data Sheet, Rev. J, Texas Instruments.
Mariya Vincent
Department of Electronics and Communication Engineering, Rajagiri School of Engineering
and Technology, Kakkanad, India
ABSTRACT: Improper deployment of Base Stations (BSs) and Relay Stations (RSs) at inappropriate locations can decrease power efficiency and throughput and increase deployment cost, transmission delay, transmission loss and power consumption. Hence, in order to fully exploit the advantages of BSs and RSs, next-generation wireless communication requires an effective BS and RS selection and deployment scheme that attains the target Coverage Ratio (CR) and throughput at an affordable deployment budget. The superiority in performance of the proposed method over conventional clustering methods is illustrated through MATLAB simulations.
1 INTRODUCTION
Wireless communication has become a crucial part of our daily personal and professional lives by exchanging information reliably from anywhere, at any time. The enormous increase in the number of mobile subscribers has proportionally increased the demand for data rate. Mobile subscribers also demand anytime-anywhere wireless broadband services with a high quality of experience. One solution for providing adequate signal-to-noise ratio (SNR) and increasing data throughput, overall coverage and system capacity is to decrease the cell area and deploy more BSs, but this escalates the deployment cost and increases inter-cell interference. Hence, alternative intelligent solutions should be implemented in next-generation wireless communication networks.
The multi-hop relay (MHR) network is one of the promising solutions recommended by the LTE-A standards to satisfy the above-mentioned service requirements. The concept of the MHR network was introduced in the LTE-A and IEEE 802.16j standards (Yang et al., 2009), where RSs are deployed along with BSs to improve coverage and capacity. RSs are a better-suited solution in locations where a backhaul connection is expensive or unavailable. RSs also have other advantages, such as lower carbon dioxide (CO2) emission, lower power consumption, easier and faster installation and low maintenance cost. Unlike the BS, which is connected to the backhaul by a wired connection, an RS can be connected wirelessly to the BS. Moreover, MHR networks help to improve network throughput while covering more mobile users over a larger coverage area. Hence, the MHR network is considered one of the potential candidates for facilitating power-efficient wireless communication. The use of RSs to extend battery life was presented by Laneman and Wornell (2000). Cho et al. (2009) proposed an RS deployment scheme that reduces handover delay by deploying the nodes at the boundaries of adjacent cell edges. A two-stage joint BS and RS placement (JBRP) scheme was proposed by Lu and Liao (2009), where the authors used k-supplier concepts in the first stage to deploy BSs and greedy-heuristic concepts for RS deployment in the second stage. Although that study considered the joint BS and RS deployment problem, it ignored the trade-off between the CR and deployment cost.
The proposed scheme also suffers from unbalanced network load. Chang and Lin (2014) proposed a uniform clustering based BS and RS placement scheme for MHR networks.
The MHR network model consists of BSs, RSs and MSs. A large number of MSs are uniformly distributed within the geographical area. According to the geographic features of that particular area, there exist feasible candidate positions where the RSs can be deployed. The RSs are deployed by the network operators at cell edges and coverage holes to improve both the coverage and the capacity of the network. Fig. 1 shows the MHR network model.
In an MHR network, data can be transmitted directly by the BS to the MSs or be relayed through the RS, which transmits at relatively lower power. The multi-hop transmission process from the BS to the MSs through the RS results in a reduced hop distance between a pair of
SNR = 10 log10 [(pt/pn) · (c/(4π f r))²]  (1)
where pt, pn, f, c and r represent the transmitted power, thermal noise power, center frequency,
velocity of light and the distance between the transmission stations respectively. The modu-
lation schemes, coding ratios and data rates for various distances and received SNRs are
enumerated in Table 1.
Let DB_R denote the data rate between BS and RS, and DR_M the data rate between RS and MS. The throughput for indirect transmission between BS and MS is given as
DB_R_M = (DB_R · DR_M) / (DB_R + DR_M)  (2)
The transmission data rate of an MS is decided based on a throughput-oriented scheme, which is given as
D = Max(DB_R_M, DB_M)  (3)
C = (1/N) ∑(k=1 to N) D(k)  (4)
Table 1. Transmission modes: modulation, coding rate, received SNR (dB), data rate (Mbps) and distance (km).
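The following is a minimal Python sketch of Eqs. (1)-(4); the parameter values in the demonstration calls are illustrative only.

import math

def snr_db(pt, pn, f, r, c=3e8):
    # Eq. (1): SNR = 10*log10[(pt/pn) * (c/(4*pi*f*r))**2]
    return 10 * math.log10((pt / pn) * (c / (4 * math.pi * f * r)) ** 2)

def relayed_rate(d_b_r, d_r_m):
    # Eq. (2): throughput of indirect BS -> RS -> MS transmission
    return d_b_r * d_r_m / (d_b_r + d_r_m)

def ms_rate(d_b_r_m, d_b_m):
    # Eq. (3): an MS uses the faster of relayed and direct transmission
    return max(d_b_r_m, d_b_m)

def average_throughput(rates):
    # Eq. (4): C = (1/N) * sum of D(k) over all N MSs
    return sum(rates) / len(rates)

print(snr_db(pt=10.0, pn=1e-12, f=2.3e9, r=1500.0))   # received SNR in dB
print(ms_rate(relayed_rate(30.0, 20.0), d_b_m=10.0))  # -> 12.0 (Mbps)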
A two-phase effective BS and RS selection and deployment scheme is explored in the present study to increase the overall power efficiency and system performance. An adaptive neuro-fuzzy based selection scheme makes an adaptive decision on the deployment of nodes from the candidate positions.
CRi = NCi / N  (5)
where NCi is the number of MSs covered under the ith RS candidate position and N is the number of MSs in the geographical area.
The fuzzy set for the CR of the ith RS candidate position takes the linguistic values low, medium and high. The corresponding membership function plot is shown in Fig. 2.
The TRi of the ith RS candidate position is given by
TRi = (Ad,i - At,i) / max{Ad,i, At,i}  (6)
where Ad,i and At,i are the average data transmission rate and average traffic demand of the MSs covered by the ith RS candidate position, respectively. CR and TR are the two fuzzy inputs considered in conventional fuzzy based BS deployment schemes (Vincent et al., 2017).
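A small sketch of these two crisp inputs, with illustrative values, follows.

def coverage_ratio(nc_i, n):
    # Eq. (5): fraction of all N MSs covered by the i-th RS candidate position
    return nc_i / n

def traffic_ratio(ad_i, at_i):
    # Eq. (6): normalized gap between average data rate and average traffic demand
    return (ad_i - at_i) / max(ad_i, at_i)

print(coverage_ratio(nc_i=120, n=1000))    # 0.12 -> a 'low' coverage ratio
print(traffic_ratio(ad_i=12.0, at_i=9.5))  # ~0.21 -> a 'positive' traffic ratio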
Similarly, the fuzzy set for the TR of the ith RS candidate position takes the linguistic values negative, center and positive. The corresponding membership function plot is shown in Fig. 3. The variation of the crisp output values with respect to the inputs is shown in Fig. 4. It can be seen that the output is low in magnitude for small values of the input combinations and increases as their values increase. The results obtained are in accordance with the fuzzy rules.
where DCBS and DCRS are the deployment costs of a BS and an RS, respectively, and u and v are the numbers of candidate positions of BSs and RSs, respectively. βi and γi are defined as follows:
C ≥ ECR (10)
The performance of the proposed method is analysed using MATLAB 2015a. The proposed fuzzy based BS and RS deployment scheme is compared with the uniform clustering based deployment method; Chang and Lin (2014) proved that the coverage and throughput performance of the uniform clustering scheme is far better than that of the JBRP scheme. A practical wireless cellular network environment is simulated. The assumptions and simulation parameters are as follows:
The geographic area is a square of size 10 km × 10 km.
The system consists of BSs, RSs and MSs.
The candidate positions of BSs and RSs are randomly selected within the geographic area.
The number of candidate positions of BSs is taken as 6, and the number of candidate positions for RSs is varied from 20 to 60.
MSs are uniformly distributed in the geographic area.
The coverage radii of the BS and RS are 3.2 km and 1.9 km, respectively.
The deployment costs of a BS and an RS are 9 and 3 units, respectively.
The traffic demand of each MS is uniformly selected between 5 and 15 Mbps.
The data rates between MS and BS or between MS and RS are calculated based on Table 1.
Fig. 5 and Fig. 6 show the deployment results obtained for a sample simulation environment under a budget constraint of 50 units and a coverage constraint of 80%, respectively. From the simulation results, it can be inferred that the BSs and RSs are judiciously deployed such that the maximum number of MSs is covered. In Fig. 5, the proposed scheme achieves a coverage of more than 80% at a budget cost of 42 units, which is calculated using [7] and is less than the target budget. In Fig. 6, more BSs and RSs are deployed in order to achieve the expected coverage of 80% without any budget constraint.
Fig. 7 shows the average throughput comparison between the proposed fuzzy based scheme and the uniform clustering scheme for a deployment budget of 50 units. It is observed that increasing the number of RSs initially increases the throughput per user; after a certain number of RSs, however, the average system throughput remains constant due to the co-channel interference between RSs and between BSs and RSs. Fig. 8 shows the average CR for the deployment budget of 50 units.
Increasing the number of candidate positions of BSs and deploying RSs at the cell edge increase the CR, but there is no significant improvement in the CR when the number of candidate locations of RSs is above 50.
Fig. 9 and Fig. 10 show the comparison results of average throughput per user (Mbps) and CR for an ECR of 80%. Initially, the average throughput per user and the CR increase with
5 CONCLUSION
In this paper, an adaptive neuro-fuzzy based joint BS and RS deployment scheme is proposed for next-generation wireless communication. The deployment scheme is formulated to maximize the network coverage, throughput and power efficiency of the system. The performance of the proposed scheme is compared with the conventional uniform clustering based scheme; the proposed scheme satisfies both budget and coverage constraints and shows improved performance over the conventional scheme for all the considered combinations. The simulation results show that the proposed scheme is computationally simple and sustainable under different channel and path-loss conditions. Since the selection and deployment are carried out based on adaptive neuro-fuzzy logic, the proposed method can be considered a well-suited solution for real-time channel conditions. An adaptive neuro-fuzzy based scheme that considers additional input parameters, such as interference and CO2 emission, can be considered in future work.
REFERENCES
[1] Akyildiz, I.F, D.M. Gutierrez-estevez, R. Balakrishnan and E. Chavarria-reyes (2014). LTE-
Advanced and the evolution to Beyond 4G (B4G) systems, Physical Communication, Vol. 10,
pp. 31–60.
[2] Chang, B.J., Y.H. Liang and S.S. Su (2015). Analyses of Relay Nodes Deployment in 4G
Wireless Mobile Multihop Relay Networks, Wireless Personal Communications, Vol. 83, No. 2,
pp. 1159–1181.
[3] Chang, J. and Y. Lin (2014). A clustering deployment scheme for base stations and relay stations in
multi-hop relay networks, Computers and Electrical Engineering, Vol. 40, pp. 407–420.
[4] Fettweis, G. and P. Rost (2011). Green communications in cellular networks with fixed relay
nodes, Cambridge University Press.
[5] Chang, J.-Y. and Y.S. Lin (2015). An Efficient Base Station and Relay Station Placement Scheme
for Multi-hop Relay Networks. Wireless Personal Communications, Vol. 82, No. 3, pp. 1907–1929.
[6] Laneman, J.N. and G.W. Wornell (2000). Energy-Efficient Antenna Sharing and Relaying for
Wireless Networks, IEEE Wireless Communication Networking Conference, pp. 7–12.
[7] Lu, H. and W. Liao (2009). Joint Base Station and Relay Station Placement for IEEE 802.16j
Networks, IEEE Global Telecommunication Conference, pp. 1–5.
[8] Vincent, M., K.V. Babu, M. Arthi and P. Arulmozhivarman (2017). Corrigendum to “Power-aware
fuzzy based joint base station and relay station deployment scheme for green radio communication”
[J. Sustain. Comput.: Inform. Syst. 13 (2017) 1–14].
[9] Yang, Y., H. Hu, J. Xu and G. Mao (2009). Relay Technologies for WiMAX and LTE-Advanced
Mobile Systems, IEEE Communication Magazine, Vol. 47, pp. 100–105.
ABSTRACT: Semantic Textual Similarity (STS) is the task of deciding the degree of semantic similarity between pairs of sentences. STS plays an essential role in Natural Language Processing tasks and has drawn considerable research attention in recent years. This survey discusses multiple features, including word alignment-based similarity, sentence vector-based similarity and sentence constituent similarity, used to assess the similarity of sentence pairs.
1 INTRODUCTION
Semantic Textual Similarity (STS) is a measure used to compute the similarity between two textual snippets based on the likeness of their meaning. Measuring semantic similarity is a difficult task, since it is easy to express a comparable idea in many different ways; STS is therefore a deep natural language understanding problem. STS has been widely utilized in natural language processing tasks such as machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, and dialog and conversational frameworks.
Previous research on semantic text similarity has centered on documents and paragraphs, while the objects compared in many NLP tasks are texts of sentence length, for example video descriptions, news headlines and beliefs. In this paper, we examine semantic similarity between two sentences. Given two input textual snippets, we have to automatically assign a score that indicates their semantic similarity. In general, the fundamental task is to compute semantic similarity for two given English sentences as a score in the range [0, 5], where the score increases with similarity (i.e., 0 indicates no similarity and 5 indicates identical meaning). Similarity scores, with explanations, for some English sentence pairs are shown in Figure 1. The evaluation metric used is the Pearson correlation coefficient.
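For instance, the Pearson correlation between hypothetical gold and system scores on the [0, 5] scale can be computed as follows.

from scipy.stats import pearsonr

gold = [4.8, 3.2, 0.5, 2.0, 5.0]    # human-annotated similarity scores
system = [4.5, 3.0, 1.0, 2.2, 4.7]  # scores predicted by an STS system
r, _ = pearsonr(gold, system)
print(round(r, 4))  # a value close to 1.0 indicates strong agreement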
STS is also closely related to textual entailment (TE) (Dagan et al., 2006), paraphrase recognition (PARA) (Dolan et al., 2004) and semantic relatedness, but it differs from all of these tasks. Textual entailment recognition is the task of deciding, given two text fragments, whether the meaning of one text can be inferred from the other. In TE the relation is directional: for example, a car is a vehicle, but a vehicle is not necessarily a car.
Paraphrase recognition aims to identify whether two sentences convey the same meaning using different words. The two important aspects of a paraphrase are same meaning and different words; these two concepts are quite intuitive but difficult to formalize. For example, consider the sentences below:
1. Hamilton Construction Company built the new extension.
2. The new extension was built by Hamilton Construction Company.
From this example, we can identify that (1) and (2) are paraphrases. Unlike textual entailment and paraphrase detection, STS captures degrees of meaning overlap rather than making binary classifications of particular relationships. Similarly, semantic relatedness expresses a graded semantic relationship; it is nonspecific about the nature of the relationship, with contradicting material still being a contender for a high score. For example, 'night' and 'day' are strongly related yet not particularly similar.
Since 2012, many efforts have been undertaken on STS over English sentence pairs. The STS shared task has been held yearly since 2012, providing a platform for new algorithms and models. During this period, diverse similarity methods and datasets have been explored. One similarity signal that has emerged is Sultan's alignment-based method; deep learning has also become a promising source of features for such evaluation. Research continues toward finding the best performing feature set.
This paper is organized as follows: Section 2 presents the methods used so far for STS; Section 3 discusses feature sets for STS and provides a comparison between them; Section 4 provides a discussion of the datasets used; Section 5 discusses various future research directions; and Section 6 gives a brief concluding comment.
2 METHOD
The methods used so far can be separated into three general classes: alignment approaches, vector space approaches and machine learning approaches (Cheng et al., 2016). Figure 2 shows the classification of methodologies based on the approaches used.
Alignment approaches align words or phrases in a sentence pair and then take the similarity measure as the quality or coverage of the alignments. Vector space approaches represent sentences as bag-of-words vectors, and the similarity measure is the vector similarity. Machine learning approaches combine diverse similarity measures and features using supervised machine learning models.
This survey considers evidence from these three categories to measure semantic text similarity between sentence pairs. From each sentence pair, we extract alignment-based similarity features, vector-based similarity features and sentence constituent similarity features. The study found that the extracted feature sets are then combined through a Support Vector Regression (SVR) model to produce the similarity score for the sentence pair, as sketched below. Figure 3 depicts the classification of feature sets used.
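A minimal sketch of this combination step is given below; the feature values and gold scores are hypothetical, not those of the surveyed systems.

import numpy as np
from sklearn.svm import SVR

# Each row holds [alignment_sim, vector_sim, constituent_sim] for one sentence pair.
X_train = np.array([[0.9, 0.8, 0.85],
                    [0.2, 0.3, 0.10],
                    [0.6, 0.5, 0.55]])
y_train = np.array([4.8, 0.7, 2.9])  # gold STS scores in [0, 5]

model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X_train, y_train)
print(model.predict([[0.7, 0.6, 0.65]]))  # predicted similarity score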
3 FEATURE EXTRACTION
4 DISCUSSION ON DATASETS
The datasets used for the task come from a combination of different settings; this survey discusses three. The five datasets in Cheng et al. (2016) are the Questions and Answers (Q&A) Answer-Answer, Headlines, Plagiarism Detection, Post-Edited Machine Translations and Q&A Question-Question datasets. These were taken from the previous four years (2012-2015) of the SemEval English STS task.
The alignment mechanism of Sultan et al. (2014) was based on six datasets, namely deft-forum, deft-news, headlines, images, OnWN and tweet-news, all from SemEval STS 2014. The sentences were gathered from a variety of sources such as discussion forums, news articles and news headlines. Each dataset consists of a minimum of 750 sentence pairs.
ExB Themis (Hanig et al., 2015) is a multilingual STS system for English and Spanish, so datasets in both languages were utilized. For English, the datasets proposed by Agirre et al. in 2012-2014 were used. The collection in Agirre et al. (2012) comprises MSRpar, MSRvid, OnWN, SMTnews and SMTeuroparl. Similarly, the dataset in Agirre et al. (2013) comprises HDL, FNWN, OnWN, SMT and TYPED, whereas that of Agirre et al. (2014) comprises HDL, OnWN, Deft-forum, Deft-news, Images and Tweet-news. From the above, ExB Themis formed a new combined dataset comprising the domains forum, students, belief, headlines and images. For Spanish, datasets from Wikipedia and Newswire were used. Table 1 gives an overview of the feature sets discussed so far and the datasets used by them.
The performance on each dataset is evaluated using the Pearson correlation coefficient. Cheng et al. (2016) reported a mean Pearson correlation of 0.69996.
The Sultan aligner (Sultan et al., 2014) achieved weighted mean Pearson correlations between 0.7337 and 0.7610. ExB Themis (Hanig et al., 2015) achieved a mean Pearson correlation of 0.7942 on the English datasets and 0.6725 on the Spanish data.
5 FUTURE RESEARCH DIRECTIONS
Semantic Textual Similarity (STS) has drawn considerable attention in recent years, and it is clear that STS can be improved further to a great extent. The work discussed so far addresses similarity between monolingual sentences, so extending STS to multilingual semantic similarity remains an open direction.
The existing studies also show that it is difficult to assign semantics to units larger than words. It is therefore an ideal opportunity to create algorithms that can adapt to the requirements of different data domains and applications.
6 CONCLUSION
Semantic Textual Similarity measures the extent of similarity between two sentences. Different features, including alignment-based similarity features, vector-based similarity features and sentence constituent similarity features, are used to compute the semantic score between the sentences. The Pearson correlation coefficient is used as the evaluation metric. Sentence similarity based on support vector regression shows the best performance among all the evaluation strategies with these feature sets.
REFERENCES
Agirre, E., Banea, C., Cardie, C., Cer, D.M., Diab, M.T., Gonzalez-Agirre, A., … & Wiebe, J. (2014).
SemEval-2014 Task 10: Multilingual Semantic Textual Similarity. SemEval@ COLING, 81–91.
Agirre, E., Banea, C., Cardie, C., Cer, D.M., Diab, M.T., Gonzalez-Agirre, A., … & Rigau, G. (2015).
SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability.
SemEval@ NAACL-HLT, 252–263.
Agirre, E., Cer, D., Diab, M., Gonzalez-Agirre, A., & Guo, W. (2013). *SEM 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.
Agirre, E., Diab, M., Cer, D., & Gonzalez-Agirre, A. (2012). Semeval-2012 task 6: A pilot on semantic
textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-
Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth
International Workshop on Semantic Evaluation, Association for Computational Linguistics, 385–393.
ABSTRACT: Sentence compression makes a sentence shorter whilst preserving its mean-
ing and grammar; it is applicable in different fields such as text summarization. Among the
various methods to shorten sentences, extractive sentence compression is still far from being
solved. Reasonable machine-generated sentence compressions can often be obtained by con-
sidering subsets of words from the original sentence. This paper presents a survey of the
different approaches to extractive sentence compression.
1 INTRODUCTION
Sentence compression is the process of shortening a text whilst keeping the idea of the origi-
nal sentence. It is in such a way that the grammar and structure of the sentence is greatly
simplified, while the underlying meaning and information remains the same. It makes the
necessary to speak by eliminating unnecessary words or phrases. Sentence compression is
also known as text simplification or summarization.
Sentence compression is an important area of research because it is a backbone for differ-
ent Natural Language Processing (NLP) applications. Normally, human languages contain
complex compound constructions which causes difficulties in automatic modification, clas-
sification or processing of human-readable text. Today, sentence compression techniques are
widely used in many industries, mainly as a part of data mining and machine learning. It has
been widely used for displaying on small screens (Corston-Oliver, 2001), such as in television
captions, automatic title generation (Vandeghinste & Pan, 2004), search engines, topic detec-
tion, summarization (Madnani et al., 2007), machine translation, paraphrasing, and so forth.
There are various techniques for sentence compression, including word or phrase removal, the use of shorter paraphrases, and common-sense knowledge. There are, primarily, two types of sentence compression: extractive and abstractive. In the extractive method, objects are extracted from the source sentence without modifying the objects themselves; the main idea is to find the subset of important words that carries the information of the entire sentence (a toy sketch of this view follows below). An example of this is key phrase extraction, where the goal is to select individual words or phrases that are important (without modifying them) to create a short sentence that preserves the meaning of the original. Abstractive sentence compression considers a semantic representation of the sentence in order to make it simpler.
The task of extractive sentence compression is very complex: it is not simply shortening a sentence, as the properties of the original sentence should be preserved. The performance of a technique depends upon the compression rate, the grammar and the retention of the important words from the original sentence. Various approaches to extractive sentence compression include: generative noisy channel models (Knight & Marcu, 2002); tree transduction models (Knight & Marcu, 2002; Cohn & Lapata, 2007, 2009; Yao et al., 2014); structured discriminative compression models; Long Short-Term Memory (LSTM) networks (Sakti et al., 2015); and ILP (Clarke & Lapata, 2008; Yao & Wan, 2017; De Belder & Moens, 2010; Wang et al., 2013). In addition, the different techniques make use of machine-learning algorithms such as the Maximum Entropy Model and Support Vector Machines (SVMs).
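As a toy illustration of this word-subset view (not any of the surveyed models), the sketch below keeps the highest-scoring words of a sentence in their original order, ranking content words above an assumed stopword list.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "by", "was", "is"}

def compress(sentence, rate=0.6):
    words = sentence.split()
    keep = max(1, int(len(words) * rate))
    # Rank indices: content words before stopwords, earlier words first.
    ranked = sorted(range(len(words)),
                    key=lambda i: (words[i].lower() in STOPWORDS, i))
    chosen = sorted(ranked[:keep])  # restore the original word order
    return " ".join(words[i] for i in chosen)

print(compress("The new extension was built by Hamilton Construction Company"))
# -> "new extension built Hamilton Construction"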
2 LITERATURE REVIEW
Different approaches have been put forward for extracting the important words from a sentence in order to form a new compressed sentence. Figure 1 shows the approaches to extractive sentence compression discussed here.
Figure 2. Example of an SCFG; dotted lines denote variable correspondences, and ε denotes node deletion.
3 CRITICAL ANALYSIS
An analysis of the approaches to extractive sentence compression discussed above is conducted below by considering the particular contribution of each approach.
Table 1. Comparison of models and methodologies, with compression rate and accuracy reported separately for the written and spoken corpora.
There are different datasets available for analyzing sentence compression tasks. Most of the approaches discussed in this survey (Cohn & Lapata, 2009; Sakti et al., 2015; Galanis & Androutsopoulos, 2010; Yao & Wan, 2017) use the same human-annotated corpora for evaluation.
These four papers consider both written and spoken corpora. For the written corpora, they use sentences from the British National Corpus (BNC) and the American News Text Corpus; the spoken corpora are annotations from broadcast news. Mostly, compression rate and F1 score are considered for automatic evaluation: the grammatical relations of a generated compression are measured against the gold-standard compression by the F1 score. The LSTM-based method of Sakti et al. (2015), however, does not focus on grammar; instead, it considers an importance measure along with the compression rate. Table 1 gives the comparison data.
5 CONCLUSION
Basically, longer sentences are a waste of memory and time, so sentence compression is a necessary task. Extractive sentence compression has a wide range of applications in different domains and is one of the key activities of text processing in NLP; Google's hand-fed AI (artificial intelligence) is one of the best-known examples. It can also be applied in fields such as phrasal substitution, especially for figurative expressions (Liu & Hwa, 2016). Hence, a survey of extractive sentence compression may be helpful in choosing the best technique.
There is a wide range of methodologies for extracting the relevant words from a sentence in order to form a new compressed sentence, and different authors choose different techniques for extraction and for forming a good sentence. We have discussed a few approaches to extractive sentence compression and made a comparative study of them.
Extractive sentence compression can be improved further. Most of the methods discussed here are suitable for shorter sentences but less suitable for longer ones. In future, models can be trained with data on a large scale, improving compression accuracy on long sentences as well. Extractive compression can also be modeled using sampling-based methods, making it applicable in other areas such as ECG compression.
REFERENCES
Berg-Kirkpatrick, T., Gillick, D. & Klein, D. (2011). Jointly learning to extract and compress. In
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human
Language Technologies (pp. 481–490). Stroudsburg, PA: Association for Computational Linguistics.
ABSTRACT: Figurative language often uses words with a meaning different from their literal meaning. Detection of figurative language helps in computational linguistics and social media sentiment analysis. The majority of social media comments make use of figurative language for better expression of user emotions. Figurative language such as simile, humor, sarcasm and irony has widespread use within social media. The purpose of this paper is to survey various methods for the detection of figurative language based on their distinguishing features.
1 INTRODUCTION
Figurative language expresses the feeling of the writer rather than a literal interpretation; in a literal interpretation, the writer expresses things as they are. Analyzing figurative language therefore offers a better understanding of a text than its literal interpretation alone. In the current social media arena, people make use of figurative devices like simile, humor, irony and sarcasm to express their emotions, and identifying features that distinguish figurative language well is important for the detection process.
Most figurative language has an implicit meaning embedded within it, which makes it difficult to identify the actual sense intended. Each figurative device has specific features that distinguish it, and these features can differ between languages. Recognizing figurative language by combining the various features that distinguish it is a trending topic in social media analysis.
There exists a wide range of figurative language, and this survey focuses on simile, sarcasm, irony and humor because these are the figures of speech most used on micro-blogging platforms. A simile is a figure of speech that compares two different things with the help of words such as like, as and than; the compared properties can be implicit or explicit. For example: Our room feels like Antarctica. Sarcasm uses words to express a meaning that is the opposite of the actual meaning, and is mainly used to criticize someone's feelings. For example: I love the way my sweetheart cheats on me. Irony includes words that express the opposite of the actual situation. For example: Butter is as soft as a slab of marble. Humor is a figure of speech used to produce the effect of laughter and to make things funny, and provides a direct implication of the situation. For example: He faces more problems than a math book has.
Figurative language detection is necessary in different applications such as computational linguistics, social multimedia and psychology. Text summarization, machine translation systems, advertisements, news articles, sentiment analysis and review-processing systems all make use of figurative language.
This survey mainly focuses on methods that make use of the various distinguishing features of different figurative devices, and helps to identify the features that best distinguish each one.
This paper is organized as follows. Section 2 gives a formal definition of figurative language detection with an example; also identified are various distinguishing features for
Figure 1. Classification of methodologies for figurative language detection based on the features used.
Table 1. Overview of the compared features used for detection of several figurative languages.

Lexical features:
- Thu & New (2017): unigrams, bigrams and trigrams are considered; targets simile, sarcasm, irony and humor.
- Joshi et al. (2015): unigrams; targets sarcasm.
- Barbieri & Saggion (2014): presence of frequent and rare words; targets irony and humor.
- Khokhlova et al. (2016): n-grams; targets irony and sarcasm.

Syntactic and semantic features:
- Barbieri & Saggion (2014): synonyms and ambiguity; targets irony and humor.
- Qadir et al. (2016): syntactic rules; targets simile.
- Khokhlova et al. (2016): parts of speech and hashtags; targets irony and sarcasm.

Pragmatic features:
- Barbieri & Saggion (2014): emoticons, laughs and punctuation marks; targets irony and humor.
- Joshi et al. (2015): count of capital letters; targets sarcasm.
- Khokhlova et al. (2016): presence of interjections; targets irony and sarcasm.
- González-Ibánez et al. (2011): positive and negative emoticons, ToUser; targets sarcasm.

Emotion-based and sentimental features:
- Thu & New (2017): eight basic emotions from EmoLex and three sentimental features from Vader; targets simile, sarcasm, irony and humor.
- Khokhlova et al. (2016): eight basic emotions from EmoLex; targets irony and sarcasm.
- Qadir et al. (2015): simile component polarity and simile connotation polarity; targets simile.
- Joshi et al. (2015): polarity of each word, explicit and implicit incongruity; targets sarcasm.
- Barbieri & Saggion (2014): polarity of synsets of words, positive sum, negative sum, positive-negative gap, positive-negative mean, positive single gap and negative single gap; targets irony and humor.
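As a concrete illustration of the pragmatic features in Table 1, the sketch below counts a few assumed surface signals in a tweet; the regular expressions are illustrative, not those of any surveyed system.

import re

def pragmatic_features(tweet):
    return {
        "emoticons": len(re.findall(r"[:;=][-']?[()DPp]", tweet)),
        "laughs": len(re.findall(r"\b(?:ha(?:ha)+|lol)\b", tweet, re.I)),
        "exclamations": tweet.count("!"),
        "capitals": sum(ch.isupper() for ch in tweet),
    }

print(pragmatic_features("I LOVE waiting in line for hours :) lol!!"))
# -> {'emoticons': 1, 'laughs': 1, 'exclamations': 2, 'capitals': 5}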
3 FURTHER SUGGESTIONS
The occurrence of one figurative device within another can also be identified for a better understanding of the text, for example the identification of sarcastic similes, ironic similes and humorous similes. Automatically inferring implicit properties in a sentence could improve the efficiency of the system. Figurative language detection can also be approached from different angles that make use of cognitive and psycholinguistic information, gestural information, tone and paralinguistic cues. Another suggestion is to build a system that works efficiently on large texts. All of these are possible directions for future research in this area.
4 CONCLUSION
Figurative language requires deeper analysis and understanding than a literal interpretation of the text. There are many types of figurative language, but in the case of social media, simile, sarcasm, irony and humor have the greatest impact. Supervised classification methods exist for the automatic detection of figurative language, and identifying the features that best distinguish each figurative device helps to improve the efficiency of such systems. Through this survey, we have tried to highlight the distinguishing features of each figurative device, in the hope of shedding some light on the different features of the various figures of speech and how they can be incorporated for the automatic detection of those devices.
REFERENCES
Bamman, D. & Smith, N.A. (2015). Contextualized sarcasm detection on Twitter. In Ninth International
AAAI Conference on Web and Social Media (pp. 574–577). New York, NY: AAAI Press.
Barbieri, F. & Saggion, H. (2014). Automatic detection of irony and humour in Twitter. In S. Colton, D.
Ventura, N. Lavrac & M. Cook (Eds.), Proceedings of the Fifth International Conference on Compu-
tational Creativity, Ljubljana, Slovenia, 10–13 June 2014 (pp. 155–162).
Crossley, S.A., Kyle, K. & McNamara, D.S. (2017). Sentiment Analysis and Social Cognition Engine
(SEANCE): An automatic tool for sentiment, social cognition, and social-order analysis. Behavior
Research Methods, 49(3), 803–821.
Davidov, D., Tsur, O. & Rappoport, A. (2010). Semi-supervised recognition of sarcastic sentences in
Twitter and Amazon. In Proceedings of the Fourteenth Conference on Computational Natural Lan-
guage Learning (pp. 107–116). Stroudsburg, PA: Association for Computational Linguistics.
Fellbaum, C. (1998). WordNet: An electronic lexical database. Cambridge, MA: MIT Press.
Fersini, E., Pozzi, F.A. & Messina, E. (2015). Detecting irony and sarcasm in microblogs: The role of
expressive signals and ensemble classifiers. In Proceedings of IEEE International Conference on Data
Science and Advanced Analytics (DSAA) 2015 (pp. 1–8). New York, NY: IEEE.
1 INTRODUCTION
Interpersonal relationships have long been studied by analysts in several domains to gain a better understanding of narratives. These relationships exhibit a variety of phenomena such as family, friendship, hostility and romantic love. Narratives are a rich reflection of these relationships, which makes them a good medium for analysis. Two approaches used for natural language understanding are the event-centric approach and the character-centric approach.
An event-centric approach tries to understand the narrative based on the events described within it. These methods aim to represent the given text using sequences of events, their participants, and the relationships between them; such a representation is called a 'script'. Other representations of narrative include frames, plot units and schemas.
Character-centric approaches, on the other hand, consider the characters involved in the narrative; identifying the relationships between the characters is a better approach to narrative understanding. Recent work has focused on creating structures called social networks, sometimes called signed networks, to model relationships between characters. These social networks are constructed based on the co-occurrence of characters in conversations, social events, and so on, as sketched below.
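A minimal sketch of such a co-occurrence construction, over hypothetical scenes, is given below; real systems would add edge signs or relation labels on top of these counts.

from itertools import combinations
from collections import Counter

# Hypothetical input: each conversation or scene lists the characters present.
scenes = [
    ["Alice", "Bob"],
    ["Alice", "Bob", "Carol"],
    ["Bob", "Carol"],
]

edges = Counter()
for characters in scenes:
    for pair in combinations(sorted(characters), 2):
        edges[pair] += 1  # edge weight = number of co-occurrences

print(edges)
# Counter({('Alice', 'Bob'): 2, ('Bob', 'Carol'): 2, ('Alice', 'Carol'): 1})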
In general, characterizing the nature of relationships between individuals can assist automatic understanding of a text by explaining the actions of the people mentioned in it and building expectations of their behavior toward others. Modeling relationships has many real-world applications, such as predicting possible relationships between people from their posts or messages in social media, personalizing newsfeeds, predicting virality, and suggesting friends or topics of interest for a particular user.
Relationship extraction is commonly achieved through supervised or unsupervised methods; some works use hybrid approaches. These approaches make use of different machine-learning algorithms for classification, for example Naive Bayes and Support Vector Machines (SVMs). Relationship extraction also has the potential to use deep learning models for better performance.
Figure 1. Classification of methodologies for relationship extraction based on the approach used.
Proposed approaches (type of approach; model; dataset/domain; extracted relations; results):

Supervised:
- Culotta et al. (2006): articles from Wikipedia; relations mother, cousin, friend, education, boss, member of and rival; F1 = 0.6363, P = 0.7343, R = 0.5614.
- Chaturvedi et al. (2016): SparkNotes, AMT; cooperative and non-cooperative; F1 = 0.76, P = 0.76, R = 0.76.
- Srivastava et al. (2016): dataset of 300 English novel summaries; cooperative and non-cooperative; F1 = 0.805, P = 0.806, R = 0.804.

Unsupervised:
- Krishnan and Eisenstein (2014): dataset of movie scripts; inducing the social function of address terms; F1 = 0.83.
- Chaturvedi et al. (2017): dataset of 300 English novel summaries; familial, desire, active, communicative and hostile; F1 = 0.55.

Hybrid:
- Devisree and Raj (2016): collection of kids' stories; parent-child, friendship, no-relation; P = 0.87, R = 0.79.
- Makazhanov et al. (2014): novels; familial relations; A = 0.78.
Characterizing the relationships between people can be useful for understanding a text. Most of the methodologies presented so far are domain-specific, so they need to be extended to other domains. The current approaches made several assumptions about the types of relationships, which could be relaxed in future work. Future work could also focus on studying asymmetric relationships. Other directions of study include the usefulness of varying text modes (genre, number of characters, time period of novels, etc.) and mining 'relationship patterns' from such texts. Recent advances in deep learning can be used to obtain better results in relation extraction between characters. Error analysis shows that mismatched co-reference labeling is the most common source of errors in the existing models. In future, these models could be customized to study the various stages of certain types of relationships. In addition, including more contextual information could improve the characterization of relationships.
4 CONCLUSION
There are different methodologies for extracting relationships between characters. Commonly used approaches for modeling relationships can be classified into two categories, supervised and unsupervised; some methodologies use a mixture of approaches and are called hybrid approaches, which take advantage of different methodologies to get better results. Some recent approaches for extracting relationships belonging to these three classes have been discussed in this paper. Research in this area still has room for improvement.
REFERENCES
Agarwal, A., Kotalwar, A., Zheng, J. & Rambow, O. (2013). SINNET: Social interaction network
extractor from text. In The Companion Volume of the Proceedings of IJCNLP 2013: System Demon-
strations, 14–18 October 2013, Nagoya, Japan (pp. 33–36).
Augenstein, I., Das, M., Riedel, S., Vikraman, L. & McCallum, A. (2017). SemEval 2017 Task 10:
Science IE - Extracting keyphrases and relations from scientific publications. arXiv:1704.02853.
Bost, X., Labatut, V., Gueye, S. & Linares, G. (2017). Extraction and analysis of dynamic conversational
networks from TV Series.
Brennan, J.R., Stabler, E.P., Van Wagenen, S.E., Luh, W.-M. & Hale, J.T. (2016). Abstract linguistic
structure correlates with temporal activity during naturalistic comprehension. Brain and Language,
157, 81–94.
Chaturvedi, S., Iyyer, M. & Daumé, H., III. (2017). Unsupervised learning of evolving relationships
between literary characters. In Proceedings of the Thirty-First AAAI Conference on Artificial Intel-
ligence (AAAI-17) (pp. 3159–3165). Palo Alto, CA: AAAI Press.
Chaturvedi, S., Srivastava, S., Daumé, H., III & Dyer, C. (2016). Modeling evolving relationships
between characters in literary novels. In Proceedings of the Thirtieth AAAI Conference on Artificial
Intelligence (AAAI-16), 12–17 February 2016, Phoenix, Arizona (pp. 2704–2710). Palo Alto, CA:
AAAI Press.
Culotta, A., McCallum, A., & Betz, J. (2006). Integrating probabilistic extraction models and data min-
ing to discover relations and patterns in text. In Proceedings of the Human Language Technology Con-
ference of the North American Chapter of the Association of Computational Linguistics (pp. 296–303).
Stroudsburg, PA: Association for Computational Linguistics.
Devisree, V. & Raj, P.R. (2016). A hybrid approach to relationship extraction from stories. Procedia
Technology, 24, 1499–1506.
Frunza, O., Inkpen, D. & Tran, T. (2011). A machine learning approach for identifying disease-treatment
relations in short texts. IEEE Transactions on Knowledge and Data Engineering, 23(6), 801–814.
He, H., Barbosa, D. & Kondrak, G. (2013). Identification of speakers in novels. In Proceedings of the
51st Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1312–1320).
Hoffmann, R., Zhang, C., Ling, X., Zettlemoyer, L. & Weld, D.S. (2011). Knowledge-based weak
supervision for information extraction of overlapping relations. In Human Language Technologies:
ABSTRACT: Morphosyntactic lexicons represent the vocabulary of a language and play a vital role in the field of Natural Language Processing (NLP). They provide information about the morphological and syntactic roles of words in a language. Various methodologies, mainly based on Machine Learning (ML), have been proposed for generating large and highly accurate morphosyntactic lexicons. The aim of this survey is to explore these methodologies and discuss their advantages and disadvantages.
1 INTRODUCTION
2 MORPHOSYNTACTIC LEXICON
2.1 Definition
A typical morphosyntactic lexicon contains the base forms and inflected forms of words, along with grammatical and semantic information such as the grammatical category and sub-categorization features. Table 1 shows a subset of a morphosyntactic lexicon for the English language.
The given lexicon consists of the base form and all the inflected forms of the word 'cry'. Each lexical entry is associated with certain attributes such as part of speech, number information and gender information; a small sketch of such an entry follows. If we can determine the lexical level and the morphological, syntactic and semantic relations between these words with the help of unlabeled corpora, then the generation of large volumes of morphologically and syntactically annotated corpora is possible. The methods covered in this survey are all language-independent, so they can be adapted to any language.
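For illustration, one such entry could be represented as follows; the attribute names are assumed, since the exact formalism varies between lexicons.

lexicon = {
    "cry":    {"lemma": "cry", "pos": "VERB", "form": "BASE"},
    "cries":  {"lemma": "cry", "pos": "VERB", "person": 3, "number": "SG"},
    "cried":  {"lemma": "cry", "pos": "VERB", "tense": "PAST"},
    "crying": {"lemma": "cry", "pos": "VERB", "form": "PRES-PART"},
}
print(lexicon["cries"]["number"])  # -> SG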
Soricut and Och (2015) used Wikipedia as the resource for English; for German, French and Spanish they used data from the WMT-2013 shared task, and the Arabic Gigaword corpus was used for Arabic. Various other manually created datasets are used for training and evaluation in individual languages.
3 FUTURE RESEARCH
Semi-supervised learning approaches best fit the problem of generating morphologically and syntactically annotated lexicons, provided corpora are available for the individual languages. Most of the approaches discussed so far considered only prefix and suffix transformations; other morphological transformations should also be considered when constructing the models, so that they can handle arbitrarily complex languages. If a methodology is language-independent, it can be applied to NLP applications in any language.
4 CONCLUSION
ML approaches can obtain high accuracy and exhibit good performance. The availability and correctness of the lexical resources used during training and evaluation largely influence the resulting NLP applications.
This survey has focused on approaches to generating lexicons with syntactic and morphological information using supervised, semi-supervised and unsupervised ML techniques, most of which are language-independent. This makes it possible to adapt the various methodologies to any language, which in turn accelerates the development of natural language applications in individual languages. Semi-supervised learning methodologies are efficient in this area and can generate more accurate and larger morphosyntactic lexicons.
REFERENCES
Ahlberg, M., Forsberg, M. & Hulden, M. (2014). Semi-supervised learning of morphological para-
digms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Association
for Computational Linguistics (pp. 569–578). Gothenburg, Sweden: Association for Computational
Linguistics.
Ahlberg, M., Forsberg, M. & Hulden, M. (2015). Paradigm classification in supervised learning of
morphology. In Human Language Technologies: The 2015 Annual Conference of the North American
Chapter of the Association for Computational Linguistics (pp. 1024–1029). Stroudsburg, PA: Associa-
tion for Computational Linguistics.
Allen, J. (1995). Natural language understanding. London, UK: Pearson.
1 INTRODUCTION
Question Answering (QA) is a particular type of Information Retrieval (IR). The main aim of Question Answering Systems (QASs) is to retrieve an exact answer to a question posed in natural language. QA is closely related to information retrieval technology: in IR, the input is a query containing keywords, and the output is a set of documents relevant to the query asked by the user. QA differs from IR in that the user can ask the system a question directly in natural language; the system then answers the question with a concise answer extracted from a source document.
QASs have been developed for different domains and differ in their information sources, the types of questions handled, the form of the answers, and so on; the number of such QASs is very large. The main type of question posed by users in natural language is the factoid question, for example, "When did the Egyptian revolution occur?". QA frameworks are divided into two main classes, namely open-domain QA systems and closed-domain QA systems.
This survey discusses the different methodologies used for answer extraction and the merits and demerits of each. In this paper, Section 2 gives an overview of QASs and the different criteria used to classify the large number of available QASs. Section 3 discusses various future research directions. Section 4 is the conclusion.
Xu et al. (2003)
This work uses various approaches, consisting of data retrieval and distinct linguistic and extraction tools such as parsing, name finding, co-reference resolution, relation extraction, propositions and established patterns, so it adopts a hybrid approach. Three runs were performed, using the F-metric for evaluation. In the main run, BBN2003A, the web was not utilized as part of the answer finding. In the second run, BBN2003B, the web was used to help find answers for factoid questions. Finally, BBN2003C was the same as BBN2003B, except that if the answer to a factoid question was found multiple times in the corpus, its score was boosted. The performance of the BBN2003A, BBN2003B and BBN2003C runs was 52.1%, 52.0% and 55.5%, respectively. The limitation of this approach is that the experiments only tested "what" and "when" questions; other factoid questions, such as "who" and "where" questions, were not considered.
The approaches are compared in terms of the model used, the key problem addressed and the type of information source.
4 CONCLUSION
In this survey, we classified QASs on the basis of criteria such as answer ranking and answer extraction. For answer ranking, the occurrence frequency of candidate answers, the similarity between question and answer, and the relevance between the information source and the question are used; for answer extraction, text patterns, named entities and similarity computation between sentences are used. In future, more reliable answer extraction strategies are needed: open-domain QA systems return fake candidate answers, noisy data and imprecise candidate answers, which influence the final answer.
REFERENCES
Allam, A.M.N. & Haggag, M.H. (2012). The question answering systems: A survey. International Jour-
nal of Research and Reviews in Information Sciences (IJRRIS), 2(3).
Figueroa, A. & Neumann, G. (2014). Category-specific models for ranking effective paraphrases in
community question answering. Expert Systems with Applications, 41(10), 4730–4742.
Gupta, V. & Lehal, G.S. (2009). A survey of text mining techniques and applications. Journal of Emerg-
ing Technologies in Web Intelligence, 1(1), 60–76.
Hao, T. & Agichtein, E. (2012). Finding similar questions in collaborative question answering archives:
toward bootstrapping-based equivalent pattern learning. Information Retrieval, 15(3–4), 332–353.
Harabagiu, S.M., Moldovan, D.I., Pasca, M., Mihalcea, R., Surdeanu, M., Bunescu, R.C. &
Morarescu, P. (2000). FALCON: Boosting knowledge for answer engines. In TREC. 9 (pp. 479–488).
Hirschman, L. & Gaizauskas, R. (2001). Natural language question answering: the view from here.
Natural Language Engineering, 7(4), 275–300.
Kolomiyets, O. & Moens, M.F. (2011). A survey on question answering technology from an information
retrieval perspective. Information Sciences, 181(24), 5412–5434.
Lampert, A. (2004). A quick introduction to question answering. CSIRO ICT Centre.
Lee, C., Shih, C., Day, M., Tsai, T., Jiang, T., Wu, C., Sung, C. & Hsu, W. (2005). ASQA: Academia
Sinica question answering system for NTCIR-5 CLQA. In Proceedings of NTCIR-5 Workshop
Meeting.
Liu, D.R., Chen, Y.H. & Huang, C.K. (2014). QA document recommendations for communities of
question–answering websites. Knowledge-Based Systems, 57, 146–160.
Liu, D.R., Chen, Y.H., Shen, M. & Lu, P.J. (2015). Complementary QA network analysis for QA
retrieval in social question-answering websites. Journal of the Association for Information Science and
Technology, 66(1), 99–116.
Liu, Y., Yi, X., Chen, R. & Song, Y. (2016). A survey on frameworks and methods of question answer-
ing. In Information Science and Control Engineering (ICISCE), 3rd International Conference
(pp. 115–119). New York, NY: IEEE.
Lopez, V., Uren, V., Sabou, M. & Motta, E. (2011). Is question answering fit for the semantic web?
A survey. Semantic Web, 2(2), 125–155.
Mendes, A.C. & Coheur, L. (2011). An approach to answer selection in question-answering based
on semantic relations. In International Joint Conference on Artificial Intelligence (IJCAI)
(pp. 1852–1857).
Mendes, A.C., & Coheur, L. (2013). When the answer comes into question in question-answering:
Survey and open issues. Natural Language Engineering, 19(1), 1–32.
Peng, F., Weischedel, R., Licuanan, A. & Xu, J. (2005). Combining deep linguistics analysis and surface
pattern learning: A hybrid approach to Chinese definitional question answering. In Proceedings of
the Conference on Human Language Technology and Empirical Methods in Natural Language Process-
ing (pp. 307–314). Association for Computational Linguistics.
1 INTRODUCTION
While reading a document, article or any other text resource, we might struggle with unfamiliar words. Unfamiliar words can be regarded as complex words, and such words make a text difficult to understand; jargon, technical terminology and so forth are particularly difficult for the general population. If everyone wrote texts in the simplest form, every reader could understand them easily, but such documents are very rare, and complex words remain a barrier to comprehending a text. If there were a system to identify these complex words and replace them with simpler alternatives, these kinds of understandability problems could be rectified easily and the reader could get enough information from the text; otherwise, it may be left unread. Lexical simplification systems have therefore been introduced; they increase the readability of a sentence by identifying complicated terms and replacing them with simpler substitutes for better understanding.
A typical Lexical Simplification (LS) system consists of four stages. The first stage of
any LS system is Complex Word Identification (CWI), which is the most important stage in the
LS pipeline. In this stage, the system must identify the complex word; candidate substitutions
for the identified word are generated later. The task is difficult to implement because there is
no agreed definition of a complex word: the complexity of a word differs from person to person,
since different people have different vocabularies depending, for example, on the newspapers
they read and the way they interact.
Every lexical simplification system identifies complex words either explicitly or implicitly.
Among the different CWI methodologies, simplifying everything is the simplest method, but it
is not the most effective because it treats every word as complex. A common baseline instead
treats words that are rare in a reference corpus as complex, as sketched below.
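As an illustration of such a frequency-based baseline (our example, not a method evaluated in
this paper), the following sketch flags words that occur fewer than a threshold number of times
in a reference corpus; the corpus and the threshold of 5 are illustrative stand-ins:

from collections import Counter

def build_frequency_table(corpus_tokens):
    # Count how often each word appears in a (simple) reference corpus.
    return Counter(w.lower() for w in corpus_tokens)

def identify_complex_words(sentence_tokens, freq_table, threshold=5):
    # Flag words seen fewer than `threshold` times in the reference corpus.
    return [w for w in sentence_tokens if freq_table[w.lower()] < threshold]

# Usage: words rare in the reference corpus are flagged as complex.
freq = build_frequency_table("the cat sat on the mat the cat ran".split())
print(identify_complex_words("the perspicacious cat".split(), freq))
# -> ['perspicacious']

Real systems derive both the frequency table and the threshold from large corpora such as
Simple English Wikipedia.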
The second stage of LS is known as Substitution Generation (SG). After a complex word
has been correctly identified, the next step is to generate suitable substitutes for it: words with
a similar meaning to the complex word, but simpler. This can be done with or without
considering the context. One context-based method extracts substitute words from the
sentence-aligned parallel corpora of English Wikipedia and Simple English Wikipedia, taking
the context and the word alignment in both corpora into account when generating the
substitutions. SG methods using WordNet, by contrast, are purely context-independent, as are
all thesaurus-based approaches.
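As a concrete illustration of the context-independent, thesaurus-based route, the sketch below
queries WordNet through NLTK for candidate substitutes. The use of NLTK is our assumption
rather than a detail of any surveyed system, and ranking candidates by simplicity is omitted
(requires nltk.download('wordnet')):

from nltk.corpus import wordnet as wn

def generate_substitutions(complex_word):
    # Collect synonym lemmas of the complex word across all of its synsets.
    candidates = set()
    for synset in wn.synsets(complex_word):
        for lemma in synset.lemmas():
            name = lemma.name().replace('_', ' ')
            if name.lower() != complex_word.lower():
                candidates.add(name)
    return sorted(candidates)

print(generate_substitutions('comprehend'))  # e.g. ['apprehend', 'grasp', ...]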
2 LEXICAL SIMPLIFICATION
Lexical simplification is the task of making difficult words easier to understand. In the process
of lexical simplification, SG is the generation of all possible candidate substitution words
for a complex word. Figure 2 shows the different approaches for generating the candidate
substitutions.
Table 1. Overview of the compared approaches and datasets used for lexical simplification.
Columns: Type of approach, Model, Dataset, Result.
The degree of simplicity and the preservation of meaning and grammar are the main focuses of
any lexical simplification system. If more substitutions are produced, the degree of meaning
preservation is reduced, and vice versa; this trade-off is difficult to resolve. The performance
of SG can be improved by increasing the number of thesauri used. A suitable method for
complex word identification remains an open problem in LS, because there is no agreed
definition of a complex word and complexity varies with each individual's vocabulary. Some
domain-specific LS systems are available, and these could be extended to other domains.
4 CONCLUSION
LS systems are used to make a text more accessible. One of the main tasks in LS is substitution
generation, which generates the possible complex–simple word pairs. Several methods are
used for generating these pairs, including word-embedding models, thesauri and sentence
alignment; among these, word embedding is the most widely used. The positives and negatives
of the different methods are discussed in this paper. SG methods play an important role in
preserving the grammar and meaning of the generated sentence.
REFERENCES
Adel, H. & Schütze, H. (2014). Using mined coreference chains as a resource for a semantic task. In Pro-
ceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),
25–29 October 2014, Doha, Qatar (pp. 1447–1452). Stroudsburg, PA: Association for Computational
Linguistics.
Biran, O., Brody, S. & Elhadad, N. (2011). Putting it simply: A context-aware approach to lexical simpli-
fication. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics:
Human Language Technologies: Short papers (Vol. 2, pp. 496–501). Stroudsburg, PA: Association for
Computational Linguistics.
Bott, S., Rello, L., Drndarevic, B. & Saggion, H. (2012). Can Spanish be simpler? LexSiS: Lexical Sim-
plification for Spanish. In Proceedings of COLING 2012: 24th International Conference on Computa-
tional Linguistics: Technical Papers (pp. 357–374).
Carroll, J., Minnen, G., Canning, Y., Devlin, S. & Tait, J. (1998). Practical simplification of English
newspaper text to assist aphasic readers. In Proceedings of the AAAI-98 Workshop on Integrating
Artificial Intelligence and Assistive Technology (pp. 7–10).
Coster, W. & Kauchak, D. (2011). Learning to simplify sentences using Wikipedia. In Proceedings of the
49th Annual Meeting of the Association for Computational Linguistics, 24 June 2011, Portland, Oregon
(pp. 1–9). Stroudsburg, PA: Association for Computational Linguistics.
Daelemans, W., Höthker, A. & Tjong Kim Sang, E. (2004). Automatic sentence simplification for
subtitling in Dutch and English. In Proceedings of the 4th International Conference on Language
Resources and Evaluation, Lisbon, Portugal (pp. 1045–1048).
De Belder, J. & Moens, M.F. (2010). Text simplification for children. In Proceedings of the SIGIR Work-
shop on Accessible Search Systems (pp. 19–26).
Deléger, L. & Zweigenbaum, P. (2009). Extracting lay paraphrases of specialized expressions from
monolingual comparable medical corpora. In Proceedings of the 2nd Workshop on Building and Using
Comparable Corpora: From Parallel to Non-Parallel Corpora (pp. 2–10). Stroudsburg, PA: Associa-
tion for Computational Linguistics.
Devlin, S. & Unthank, G. (2006). Helping aphasic people process online information. In Proceedings of
the 8th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 225–226).
New York, NY: Association for Computing Machinery.
Elhadad, N. & Sutaria, K. (2007). Mining a lexicon of technical terms and lay equivalents. In Proceed-
ings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing,
29 June 2007, Prague, Czech Republic (pp. 49–56). Stroudsburg, PA: Association for Computational
Linguistics.
Fast and efficient kernel machines using random kitchen sink and
ensemble methods
ABSTRACT: The advent of the information era has affected all areas of computer
science, including the approach taken by machine learning researchers. Compared
to the early days, researchers now use a generic approach to selecting a machine
learning model. As a generic model does not encode much domain knowledge, this is
compensated for by a huge dataset, which makes model optimization more complex and
time consuming. This overhead can be reduced by using randomization instead of optimization.
There are many existing methods which apply randomization in machine learning models,
but all of them compromise on accuracy. This paper shows an efficient way to use
randomization along with ensemble methods without a decrease in accuracy.
1 INTRODUCTION
As the entire world tends toward the Internet of Things and Software-Defined Anything,
the volume and velocity of data are exploding. This has also changed researchers'
approach in the machine learning area. In the early days of machine learning, researchers
used complex models in which most of the domain knowledge was embedded, and the models
were application dependent; huge datasets were not available for optimization purposes at
that time. Now, with the help of the world wide web, the Internet of Things and other
information-era facilities, huge sets of data can be obtained. Such a dataset contains most
of the domain knowledge for a particular application, which has led researchers to follow a
generic approach to model selection.
Because of the availability of huge datasets, there is no need to embed the complete domain
knowledge into the model. This reduces the development time of models. Generic models
can be reused for similar applications which, in turn, improves a particular type of model as
it is used across many projects. As the training data contains more accurate features, the
models optimized using this data will be more accurate.
However, the huge size of datasets has increased training time steeply. If the input
dataset contains N data points, the training time of a kernel machine grows as O(N²), and
storing the kernel (Gram) matrix is difficult because of its huge N × N size.
To overcome this overhead there are many approaches, such as dimension reduction,
converting the Gram matrix from dense to sparse, decomposition of the Gram matrix, random
projection and sampling. But all of these methods lose accuracy, because when the quantity of
training data decreases, the features obtained from the dataset also decrease.
To obtain models that are both accurate and fast to train, methods which decrease training
time and methods which increase accuracy need to be integrated. This paper proposes a method
which uses a sub-sampling method and an ensemble method together to obtain a fast and
efficient kernel machine.
Section 2 introduces the Random Kitchen Sink (RKS) and ensemble methods. In Section 3
the new methodology is explained. Section 4 analyses the evaluation criteria for the new
model.
Random Kitchen Sink (RKS) (see 2.2) and other random approaches (see 2.1) are explained
in detail in this section, and different types of ensemble methods (see 2.3) are also studied;
the core RKS construction is sketched below.
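A minimal sketch of the RKS idea, following Rahimi and Recht's random Fourier features for
approximating a Gaussian (RBF) kernel; the number of features D and the bandwidth sigma
are illustrative choices, not values taken from this paper:

import numpy as np

def rks_features(X, D=500, sigma=1.0, rng=np.random.default_rng(0)):
    # Map X (n x d) to random features z(X) such that z(x) . z(y) ~ k(x, y).
    n, d = X.shape
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))  # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# A linear model trained on z(X) then approximates the kernel machine
# without ever forming the n x n Gram matrix.
X = np.random.default_rng(1).normal(size=(100, 10))
print(rks_features(X).shape)  # (100, 500)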
3 PROPOSED METHOD
The error rate E_j of base learner j is computed using Equation 1:

E_j = (1/m) Σ_{i=1}^{m} (I(x_i) − y_i)    (1)
Then the voting share can be calculated with proportions of error rate distributed between
the individual learner and entire learners using Equation 2.
V_j = ( Σ_{i=1}^{n} E_i ) / E_j    (2)
The training of base learners is done using the kernel which was built by RKS. The steps
of the proposed method are described in algorithm 3.1.
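A minimal numpy transcription of Equations 1 and 2, under the reading that I(x_i) is learner
j's prediction for sample x_i, so that Equation 1 is a mean deviation; both points are our
interpretation of the notation above:

import numpy as np

def error_rate(predictions, labels):
    # E_j (Eq. 1): mean deviation of learner j's predictions from the labels.
    return np.mean(np.abs(np.asarray(predictions) - np.asarray(labels)))

def voting_shares(errors):
    # V_j (Eq. 2): total error of all learners divided by learner j's error,
    # so more accurate learners receive proportionally larger voting shares.
    errors = np.asarray(errors, dtype=float)
    return errors.sum() / errors

print(voting_shares([0.10, 0.05, 0.20]))  # the 0.05 learner gets the largest share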
The error rate of the proposed method depends only on the error rate of each base learner
(Equation 3). The randomization performed during RKS affects this error, but because a
weighted voting-share system is used, the weighted error contribution of each learner stays
constant, so the ultimate error rate of the entire model follows its smallest-error learner:

E = max_{j=1…n} (E_j × V_j)    (3)
This error is lower than that of ordinary randomization methods which use only one learner.
In terms of time complexity, our model is also faster than the classical kernel machine: it
takes only O(d log d) time, where d is the size of the dataset for each learner, obtained
by partitioning the entire input space; here d << n. As the dataset for each kernel machine
is smaller, its Gram matrix (d × d) takes much less space to store, so the space complexity
also decreases.
5 CONCLUSIONS
We proposed a method which combines the advantages of both randomization and ensemble
methods. Without losing accuracy, the kernel machine is trained faster than a conventional
one. The algorithm splits the entire dataset and, for each piece, trains a base learner using
RKS. The weighted voting-share method is integrated with the system in order to decrease the
error rate. The time and space complexity of the problem also decrease due to the
randomization process.
REFERENCES
Achlioptas, D., McSherry, F. & Scholkopf, B. (2001). Sampling techniques for kernel methods. In
Advances in Neural Information Processing Systems 14 (pp. 335–342).
Aneesh, C., Hisham, P.M., Sachin, K.S., Maya, P. & Soman, K.P. (2015). Variance based offline power
disturbance signal classification using support vector machine and random kitchen sink. Procedia
Technology, 21(21), 163–170.
Beevi, K.S., Madhu, S.N. & Bindu, G.R. (2016). Detection of mitotic nuclei in breast histopathology
images using localized acm and random kitchen sink based classifier. 38th Annual International Con-
ference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2435–2439.
Blum, A. (2006). Random projection, margins, kernels, and feature-selection. SLSFS ’05 Proceedings of
the 2005 International Conference on Subspace, Latent Structure and Feature Selection, 52–68.
Drineas, P. & Mahoney, M.W. (2005). On the Nyström method for approximating a gram matrix for
improved kernel-based learning. Journal of Machine Learning Research, 2153–2175.
Frieze, A., Kannan, R. & Vempala, S. (2004). Fast Monte-Carlo algorithms for finding low-rank
approximations. Journal of the ACM (JACM) 51, 1025–1041.
Huang, W., Yang, Y., Lin, Z., Huang, G.-B., Zhou, J., Duan, Y. & Xion, W. (2014). Random feature
subspace ensemble based extreme learning machine for liver tumor detection and segmentation. 36th
Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 4675–4678.
Laparra, V., Gonzalez, D.M., Tuia, D. & Camps-Valls, G. (2015). Large-scale random features for kernel
regression. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 17–20.
Lin, L., Wang, F., Xie, X. & Zhong, S. (2017). Random forests-based extreme learning machine ensem-
ble for multi-regime time series prediction. Expert Systems with Applications, 83, 164–176.
Nikhila, H., Sowmya, V. & Soman, K.P. (2015). Comparative analysis of scattering and random features
in hyperspectral image classification. Second International Symposium on Computer Vision and the
Internet (VisionNet15) 58, 307–314.
Nikhila, H. (2015). Hyperspectral image classification using random kitchen sink and regularized least
squares. IEEE International Conference on Communication and Signal Processing (ICCSP).
Platt, J.C. (1999). Using analytic QP and sparseness to speed training of support vector machines. In
Proceedings of the 1998 Conference on Advances in Neural Information (pp. 557–563).
Saravanan Chandran
Department of Computer Science and Engineering, National Institute of Technology,
Durgapur, West Bengal, India
1 INTRODUCTION
The entire world is now moving towards digitization, and printed documents and manuscripts
need to be preserved in a digital format. Optical Character Recognition (OCR) is a system by
which any document, whether handwritten or printed, can be converted into an editable text
format, which also makes documents searchable. Nowadays, OCR is used by a large number of
institutions such as banks, insurance companies, post offices and book publishers to verify
the authenticity of handwritten or printed documents such as cheques and envelopes. The
research and development of OCR builds on progress in fields such as image processing,
pattern recognition and machine learning. There are two types of OCR system: offline and
online. In an offline character recognition system, the input image is captured by a scanner
and passes through three stages: preprocessing, feature extraction and classification. Online
character recognition systems work in real time.
The first step in every traditional recognition system is the removal of noise. This is followed
by other preprocessing techniques such as binarization, thinning and resizing. The preprocessed
image is then segmented into lines, words and finally characters. Unique features are extracted
from each character and the chosen classifier is trained using these feature vectors. Compared
to printed documents, handwritten character recognition is more complex, since different people
write in different ways and styles. Other challenges in handwritten character recognition are
the variation in the fonts/thickness of letters and the differences in the gaps between letters.
Moreover, the skewness of handwritten matter varies greatly from person to person
(Ryu et al., 2014). To overcome these challenges, researchers are working hard to improve such
systems.
The rest of this paper covers previous work on OCR using HOG (Section 2), the proposed
method (Section 3) and the conclusion (Section 4).
Elleuch et al. (2017) have extracted features using a HOG descriptor. The image of each char-
acter is initially divided into small cells. The HOG of each pixel is then computed using a one-
dimensional mask. They have represented a rectangular HOG using three parameters namely,
“number of cells per block, number of pixels per cell, and number of channels per cell”. The
orientation is taken between 0 and 180 degrees, with nine bins chosen to represent it. After
computing the histogram, normalization is done with the L2-norm. They compared the
performance of HOG features with Gabor features, which are extracted directly from grayscale
images. Their system was tested on the IFN/ENIT dataset: the classification error rate of the
HOG descriptor with an SVM was 1.51%, against 7.16% for Gabor features.
Elleuch et al. (2015) examined Arabic handwritten script recognition using a multi-class
SVM. In the recognition phase, structural features are input to a supervised learning algo-
rithm. SVM is used as the classifier and handwritten Arabic characters are used for testing.
Their experimental study showed excellent outcomes compared to existing Arabic OCR
systems.
Ebrahimzadeh and Jampour (2014) have developed an efficient handwritten digit recogni-
tion system using HOG and SVM. The input image is partitioned into 9 × 9 cells and a histo-
gram is computed. Eighty-one features are used to represent each digit. Digit classification is
performed using linear multi-class SVM. To validate the model, a MNIST handwritten digit
dataset is used. They have achieved 97.25% accuracy. Linear SVM yields better accuracy
compared to polynomial, RBF and sigmoid functions.
Kamble and Hegadi (2015) use Rectangular HOG (RHOG) features to recognize handwritten
Marathi characters. Normalization brings all characters to the same size; after extracting
RHOG features, the feature vector length is 576. A Sobel mask is used to measure the gradient
values and, after calculating the gradient and orientation, the histogram bins are computed.
An SVM is used as the classifier and its performance is compared with a feedforward
artificial neural network. The dataset included 8,000 samples, and the neural-network-based
classification performed better than the SVM.
Kulkarni's (2017) model performs handwritten character recognition using HOG features
combined with the center of mass of the image, classified with an SVM algorithm. Otsu's
method is used for segmentation. Once regions of interest are obtained, the mean of the
weighted mean of white pixels is calculated and assigned as the center of the image. 9-bit
integer values are extracted using the HOG descriptor. For character classification, the HASY
dataset is used, which contains handwritten alphanumeric symbols. The proposed model is
evaluated using SVM and KNN.
Qinyunlong (2008) combines HOG in multiple resolutions with canonical correlation anal-
ysis. A Gaussian pyramid is used to get a multi-resolution HOG. Once we have the gradient
map, HOG features can be extracted from these maps for each resolution. In preprocessing,
Box-Cox transformation is applied. The system is tested with three handwritten databases.
In Iamsa-at and Horata (2013), the images are converted into grayscale and, after
preprocessing, resized to 32 × 32 pixels. HOG is computed by applying a one-dimensional
mask, and the gradient is calculated from the intensity of the characters. Each pixel casts a
vote for the bin that lies closest to its orientation. The L2-norm is used to normalize the
histograms of overlapping blocks. The dataset contains Thai and Bangla handwritten
characters. The performance of a deep feedforward-backpropagation neural network (DFBNN)
and an Extreme Learning Machine (ELM) are compared: eighty hidden units were used in the
backpropagation network with a logistic activation function, while a sigmoid function is used
to train the ELM. Their experimental study shows that the DFBNN outperforms the ELM.
Tikader and Puhan (2014) have modified the traditional HOG feature for recognizing
English-Bengali scripts. The input image is not divided into cells. After computing gradients,
instead of splitting the cells, binning operation is applied to the whole image. For classifica-
tion, a linear SVM is used. The system's performance depends on the number of bins chosen,
as the two are roughly proportional.
3 PROPOSED METHOD
Due to the high variety and complexity of Malayalam handwritten strokes, shapes and
concavities, we have selected the HOG feature descriptor, which is robust to local
displacements yet still supplies discriminating feature vectors that characterize the
handwritten characters. Bhowmik et al. (2014) note that "the main idea behind the HOG
descriptors is that the local object appearance and shape within an image can be described by
the distribution of intensity or edge directions." The obtained feature vectors are then fed
into an SVM classifier to perform the classification. The system works in three steps:
• Preprocessing
• HOG feature extraction
• Classification
In the preprocessing step, basic image processing separates the characters from real samples
or prepares the data from the dataset. In the second step we extract HOG features, a highly
distinguishable descriptor for character recognition: the input image is divided into 9 × 9
cells and the histogram of gradient orientations is computed, so that each character is
represented by a vector of 81 features. The overall view of the proposed approach is
illustrated in Figure 1. HOG is a fast and reliable descriptor which generates distinguishable
features, and SVM is a fast and powerful classifier well suited to HOG features; a sketch of
this pipeline is given below. The subsequent sections explain the steps in detail.
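A minimal sketch of the pipeline, assuming one plausible HOG configuration that yields an
81-dimensional descriptor (9 cells × 9 orientation bins on a 48 × 48 character image); the
exact cell layout used in the paper may differ:

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(char_image):
    # char_image: 48 x 48 grayscale array -> 3 x 3 cells x 9 bins = 81 features.
    return hog(char_image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)

def train_classifier(images, labels):
    # Train a linear multi-class SVM on the HOG vectors.
    X = np.array([extract_hog(im) for im in images])
    return LinearSVC().fit(X, labels)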
3.2 Preprocessing
3.2.1 Noise removal
When a document is scanned, some noise is unavoidable. Noise may occur due to the quality
of the document, the scanner, etc. Before processing the image, the noise should be removed;
a low-pass filter can be used to remove it, thereby smoothing the image.
3.2.2 Binarization
Grayscale images are converted to two-tone black and white images by a process called
binarization. To convert a grayscale image into a binary image, a threshold value is
selected; all pixel values above the threshold are set to 1 and those below it to 0, as in
the sketch below.
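A one-line global-threshold binarization, as described above; the fixed threshold of 128 is
illustrative, and methods such as Otsu's can select it automatically:

import numpy as np

def binarize(gray_image, threshold=128):
    # Pixels above the threshold become 1, the rest become 0.
    return (gray_image > threshold).astype(np.uint8)

print(binarize(np.array([[10, 200], [130, 90]])))  # [[0 1] [1 0]]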
3.3 HOG
HOG was first suggested (Dalal & Triggs, 2005) for detecting the presence of humans in
images. It also has wide applications in computer vision and image processing areas because
of its characteristics such as invariance to illumination and local geometric transformations.
The input image is partitioned into small square cells, and the histogram of gradient or edge
directions is computed for each cell. The orientation of each pixel in a cell is quantized
into histogram bins, each bin representing an angle range within 0°–360° or 0°–180°. A
feature vector is formed by combining the normalized histograms of all cells.
Gx(u, v) = H ∗ I(u, v)    (1)

and

Gy(u, v) = H^T ∗ I(u, v)    (2)

θ(u, v) = tan⁻¹( Gy(u, v) / Gx(u, v) )    (4)
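A small numpy sketch of Equations 1, 2 and 4 with the one-dimensional mask H = [−1, 0, 1];
the use of scipy for the convolution and of arctan2 as a numerically safe tan⁻¹ are our
choices:

import numpy as np
from scipy.ndimage import convolve1d

def gradients(I):
    H = np.array([-1, 0, 1])
    Gx = convolve1d(I.astype(float), H, axis=1)  # horizontal gradient (Eq. 1)
    Gy = convolve1d(I.astype(float), H, axis=0)  # vertical gradient (Eq. 2)
    theta = np.arctan2(Gy, Gx)                   # orientation map (Eq. 4)
    return Gx, Gy, theta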
3.4 Classification
In the third stage, a linear multi-class support vector machine is employed to classify the
characters. The optimization criterion of a support vector machine is the width of the margin
between two classes: the greater the width, the better the separability of the patterns. The
patterns that lie on the soft margin are called support vectors, and they characterize the
classification function.
4 CONCLUSION
This paper proposes a model for handwritten Malayalam character recognition using the
histogram of oriented gradients as the feature descriptor. HOG is invariant to local geometric
changes and illumination, and SVM is used as the classifier since it delivers robust
performance.
REFERENCES
Bhowmik, S., Roushan, M.G., Sarkar, R., Nasipuri, M., Polley, S. & Malakar, S. (2014). Handwrit-
ten Bangla word recognition using HOG descriptor. In Fourth International Conference of Emerging
Applications of Information Technology (EAIT), (pp. 193–197). New York, NY: IEEE.
ABSTRACT: Malicious or unplanned falsehoods can spread on social media and can have
hazardous effects on people and society. Models for the automated verification of rumors
have already been developed, but these rumor verification models neither analyze nor predict
the impact of rumors. Impact prediction can be used to determine whether a rumor should be
responded to; this is especially relevant in sudden situations where a rumor with a large
negative impact has to be addressed. Impact prediction is therefore proposed as an extension
to the veracity prediction model.
1 INTRODUCTION
Social media services have changed the way individuals get data and news about current
events. Traditional news media just offer information to individuals, but online social media
such as Twitter and Facebook are platforms where people share information and their views
about news events. Users on Twitter create a profile and share their status or opinions in
posts known as tweets. Although an extensive volume of content is posted on Twitter, the
majority of the data is not valid or valuable in giving information about an event; tweets
can contain noise, spam, ads and personal emotions, which makes the quality of Twitter
content questionable.
1.2 Motivation
Debunking rumors at an early stage of diffusion is especially significant for limiting their
harmful impacts. To distinguish rumors from truthful events, people and organizations have
often depended on common sense and investigative journalism. Rumor-revealing sites like
snopes.com and factcheck.org are such cooperative endeavors. However, because manual
confirmation steps are involved, these sites are not complete in their topical scope and can
also have long debunking delays.
In the area of Natural Language Processing (NLP), there is recent research on analyzing
and deciding the truth value of social media content, and several works already predict the
veracity of rumors on Twitter. By predicting the impact of a false rumor along with its
veracity, the rumor can be stopped immediately, before it causes further chaos.
2 RELATED WORK
There are many works about predicting and analyzing the veracity of rumors. Some of the
major works are discussed here.
Ma et al. (2016) propose a technique for detecting rumors on microblogs with Recurrent
Neural Networks (RNNs). It presents a strategy that learns continuous representations of
microblog events for detecting rumors. The model relies on an RNN to learn hidden
representations that capture the variation of contextual information in relevant posts over
time. Using the RNN, the social-context information of an event is modeled as a
variable-length time series: when individuals are exposed to a rumor claim, they forward the
claim or comment on it, creating a continuous stream of posts. In this method, both the
temporal and textual representations of rumor posts are learned under supervision. The model
is also effective for the early detection of rumors, where satisfactory precision is achieved.
RumourEval is a SemEval shared task for identifying and handling rumors, and reactions to
them, in text (Derczynski et al., 2017). Derczynski et al. propose a shared task in which
participants analyze rumors as claims made in text, where users react to each other within
conversations attempting to determine the rumor's veracity. To evaluate the evidence for a
rumor, different sources can be used to make a final decision about its veracity. SemEval
consists of two subtasks: (a) classifying rumor tweets as support, deny or comment, and
(b) veracity classification. Subtask A corresponds to analyzing the discussion around claims
to confirm or refute them via crowd-response analysis; subtask B relates to the AI-hard
problem of evaluating a claim directly. Overall, it is a laborious methodology.
Real-time rumor debunking is the proposal of Liu et al. (2015). It is an efficient procedure
for mining language features such as people's opinions, discovering witness accounts and
extracting the underlying belief from messages. Sourcing, network propagation, credibility
and other user and meta features help to expose rumors. Their contributions include:
(1) an approach to automatically debunk rumors on social media; (2) an authentic rumor
database constructed from real data, and the process of its creation; and (3) an algorithm
for predicting veracity in real time that is potentially faster than human verification.
In Vosoughi et al. (2017), a rumor prediction approach named Rumor Gauge is discussed.
Salient features of rumors on Twitter are identified by looking at several aspects of
diffusion, including linguistics and the users involved, and Rumor Gauge compares each
aspect with respect to the spreading of true and false rumors.
A rumor signature can be formed by extracting the time series from these features. Then
using Hidden Markov Models (HMMs), the rumor signature can be classified as true or false.
That approach predicts the veracity of rumors in reasonable time and with sufficient
accuracy, so an extension of it to predict the impact of a false rumor is possible, in order
to resist the spreading of that rumor.
3 METHODOLOGY
This section discusses both veracity prediction and impact prediction. Veracity prediction is
the same as that in the method of Vosoughi et al. (2017).
3.1.1 Linguistic
Features derived from the text of the rumor are collectively known as linguistic features.
Several linguistic features were identified; those that contribute significantly to the
system are mentioned here, where significance can be determined using a chi-square test.
Ratio of negated tweets: This is the ratio of tweets having negation over the total number
of tweets in a rumor. Negations are detected using the Stanford NLP parser (Chen & Man-
ning, 2014).
Average maturity of tweets: Politeness and elegance are considered as the maturity of a
tweet. There are five indicators of the maturity of a tweet:
1. Smileys: Number of smileys present in the tweet.
2. Abbreviations: Number of abbreviations (such as gn for goodnight, sry for sorry, gbu for
god bless you) present in the tweet.
3. Vulgarity: Number of vulgar words present in a tweet.
4. Word complexity: The length of the words in the tweet is considered when checking the
maturity of the tweet.
5. Sentence complexity: Complexity of sentence contributes to the maturity of the tweet.
Ratio of tweets containing opinion and insight: Linguistic Inquiry and Word Count (LIWC)
provides a list of insight and opinion words; words from the tweet are compared with the
words in this LIWC category (Pennebaker et al., 2003).
Ratio of uncertain and guessing tweets: Guessing and uncertain words form another LIWC
category, including words such as perhaps, like and guess. Each tweet is checked against the
guessing and uncertain words from LIWC; a sketch of these ratio-style features is given below.
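The ratio-style features above all share one shape: the fraction of a rumor's tweets that
contain words from some lexicon. A minimal sketch follows, where the tiny word lists are
illustrative stand-ins for the LIWC categories and the parser-based negation detection:

NEGATIONS = {"not", "no", "never", "n't"}
UNCERTAIN = {"perhaps", "like", "guess", "maybe"}

def ratio_containing(tweets, lexicon):
    # Fraction of tweets containing at least one word from the lexicon.
    hits = sum(any(w in lexicon for w in t.lower().split()) for t in tweets)
    return hits / len(tweets) if tweets else 0.0

tweets = ["I guess this is not true", "Confirmed by police", "perhaps fake"]
print(ratio_containing(tweets, NEGATIONS))  # ratio of negated tweets
print(ratio_containing(tweets, UNCERTAIN))  # ratio of uncertain/guessing tweets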
In Equation 1, p denotes the number of positive reactions and n denotes the number of
negative reactions.
Originality: This is the ratio of the number of original tweets a user has posted to the
number of times the user posted retweets of someone else’s tweet.
Credibility: This checks that the user’s account has been officially verified by Twitter.
Influence: Influence is found by the number of followers of a user.
Role = No. of followers / No. of followees    (2)

Engagement = (T + Rt + Rp + F) / Account age    (3)

PPR = No. of positive reactions / Total no. of reactions    (4)

NPR = No. of negative reactions / Total no. of reactions    (5)
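A direct transcription of Equations 2–5; the field names on the user record are illustrative,
and taking positive plus negative reactions as the total is our assumption:

def user_features(user):
    role = user["followers"] / user["followees"]                       # Eq. 2
    engagement = (user["tweets"] + user["retweets"] + user["replies"]
                  + user["favorites"]) / user["account_age"]           # Eq. 3
    total = user["positive_reactions"] + user["negative_reactions"]
    ppr = user["positive_reactions"] / total                           # Eq. 4
    npr = user["negative_reactions"] / total                           # Eq. 5
    return role, engagement, ppr, npr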
3.3 Model
User identity features and linguistic features determine the signature of a rumor. Figure 1
depicts an overview of the proposed model. Some rumor tweets are manually annotated for
training the model, and a Hidden Markov Model (HMM) is trained using the annotated rumors.
When a new tweet arrives, the model compares the tweet with the stored collection and then
predicts the veracity. If the rumor veracity is found to be less than 0.2, the rumor is
considered fake and the corresponding tweet is taken for impact analysis. A separate HMM is
needed to model the impact, as sketched below.
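A minimal sketch of this two-model HMM step using hmmlearn; the Gaussian emissions and the
two hidden states are assumptions, not details given in the paper:

from hmmlearn import hmm

true_model = hmm.GaussianHMM(n_components=2, random_state=0)
false_model = hmm.GaussianHMM(n_components=2, random_state=0)
# X_true, X_false: stacked feature time series of manually annotated rumors.
# true_model.fit(X_true); false_model.fit(X_false)

def is_likely_false(signature):
    # Score a new rumor signature under both trained models; rumors whose
    # veracity falls below the 0.2 threshold go on to impact analysis.
    return false_model.score(signature) > true_model.score(signature)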
4.1 Dataset
The dataset is available as a single JSON file. It includes 300,002 tweets about the health
and death of the Tamil Nadu chief minister, Jayalalitha; every tweet associated with the
situation is included. For predicting the veracity of rumors, the model can also be trained
with the PHEME dataset of rumors, which contains tweets about eight events, for example the
Ferguson unrest in the US and the Germanwings plane crash in the French Alps. This dataset
is publicly available (Zubiaga et al., 2016).
5 CONCLUSION
The verification of rumors is a critical task that helps populations make decisions based on
the truth. This paper described a model for the verification of rumors and the prediction of
their impacts. As part of the impact analysis, impact features such as response rate, profile
features and diffusion rate are identified. The ability to predict the impact of a rumor
along with its veracity could be applied in the emergency services, and news consumers and
journalists can use the proposed model in their fields of work.
REFERENCES
Allport, G.W. & Postman, L.J. (1965). The psychology of rumor. New York, NY: Russell & Russell.
Chen, D. & Manning, C.D. (2014). A fast and accurate dependency parser using neural networks.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing
(EMNLP) (pp. 740–750).
Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W.S. & Zubiaga, A. (2017).
SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In
SemEval@ACL.
Liu, X., Nourbakhsh, A., Li, Q., Fang, R. & Shah, S. (2015). Real-time rumor debunking on
Twitter. Melbourne, Australia.
Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, J., Wong, K.-F. & Cha, M. (2016). Detecting
rumors from microblogs with recurrent neural networks. In The 25th International Joint
Conference on Artificial Intelligence (IJCAI 2016).
Pennebaker, J.W., Mehl, M.R. & Niederhoffer, K.G. (2003). Psychological aspects of natural
language use: Our words, our selves. Annual Review of Psychology, 54, 547–577.
Vosoughi, S., Mohsenvand, M. & Roy, D. (2017). Rumor gauge: Predicting the veracity of rumors
on Twitter. ACM Transactions on Knowledge Discovery from Data, 11(4).
Vosoughi, S., Zhou, H. & Roy, D. (2015). Enhanced twitter sentiment classification using contextual
information. pp. 16–24.
Zubiaga, A., Liakata, M. & Tolmie, P. (2016). Pheme rumour scheme dataset: Journalism usecase.
ABSTRACT: Object category classification is one of the most difficult tasks in computer
vision because of the large variation in shape, size and other attributes within the same object
class. Also, we need to consider other challenges such as the presence of noise and haze,
occlusion, low illumination conditions, blur and cluttered backgrounds. Due to these facts,
object category classification has gained attention in recent years. Many researchers have
proposed various methods to address object category classification. The main issue lies in the
fact that we need to address the presence of noise and haze which degrades the classification
performance. This work proposes a framework for multiclass object classification for images
containing noise and haze using a deep learning technique. The proposed approach uses an
AlexNet Convolutional Neural Network structure, which requires no feature design stage for
classification since AlexNet extracts robust features automatically. We compare the perform-
ance of our system with object category classification without noise and haze using standard
datasets, Caltech 101 and Caltech 256.
1 INTRODUCTION
Object category classification is the task of classifying an object into its respective class
within a group of object classes. The most important step in this task is the choice of feature
extraction method. Obtaining key features from an image containing an object is a burdensome
task due to variation in attributes like shape, size and color within the same object class.
There exist many conventional feature extraction methods for object recognition, such as the
Scale-Invariant Feature Transform (SIFT) (Lowe, 1999) and Histogram of Oriented Gradients
(HOG) (Dalal & Triggs, 2005). To overcome the limitations of low-level features (Chan et al.,
2015), dictionary learning and deep learning were introduced, in which features are learned
from the data itself instead of being designed manually.
In the feature extraction techniques specified above, most of the time is spent deciding on
ideal features for all classes of objects and selecting a suitable classification method.
Although these methods provide promising results, they are unable to capture the most compact
and flawless features, so more advanced methods, such as deep learning neural networks,
replace manually designed feature extraction for object classification. The main advantage of
using a deep learning architecture is that it learns compact and flawless features
automatically; the main disadvantage is that it requires a huge amount of data and
computational power (Hieu Minh et al., 2016).
At the end of the 20th century and in the first decade of the 21st century, neural networks
showed satisfying results on the object classification problem (Cireşan et al., 2011; Jia
et al., 2009). In recent years, however, the old neural network structures have been replaced
by new, deep and complex architectures called deep learning neural networks (LeCun et al.,
2015). As specified earlier, the main disadvantage of these deep learning structures is that
they require extensive training with a huge amount of data to capture flawless and compact
features.
2 RELATED WORK
Hieu Minh et al. (2016) suggested a feature extraction technique that combines AlexNet
with a Recursive Neural Network (RNN), known as AlexNet-RNN. AlexNet is used to extract
optimal features, which are then fed to the RNN. The RNN has a structure similar to that of
an ordinary neural network, the only difference being that it retains the learned weights
for a while.
Jian et al. (2016) used an average and weighted feature extraction method. First, the
behavior of the features is investigated; in the next step, the most powerful features are
chosen, and the average of these best features is used for classification. In the weighted
average combination, some features are omitted by assigning them a weight of zero. Finally,
all the features are integrated into a k-NN framework for classification.
Shengye et al. (2015) used a two-level feature extraction method for image classification.
In the first level, bag of words (BOW) together with a spatial image pyramid is used to
extract first-level features. In the second level, features are extracted from the
first-level features based on dense sampling and spatial area sampling. The second technique
adopts multiple kernel learning to fuse different image features into compact features.
Using the Caltech 256 and 15-scene datasets, they obtained accuracies of 54.7% and 89.32%
respectively.
Transfer learning based on a deep learning approach was employed by Ling et al. (2015). In
recent years, deep neural networks have been used to extract the high-level, most compact
and selective features from images for different computer vision tasks. As the image features
pass through different layers, feature learning takes place at different levels, and each
level represents a different abstraction (Ling et al., 2015). Applying deep learning
techniques to vision problems is difficult because they require a large amount of labeled
training data.
Foroughi et al. (2017) proposed an object categorization method for classes with significant
intra-class variation. It uses joint projection and low-rank dictionary learning with dual
graph constraints (JP-LRDL), simultaneously learning a robust projection and a discriminative
dictionary in a low-dimensional space. These can handle the different types of variation
within an object class that arise due to occlusion, changes in viewpoint and pose, size
changes and various shape alterations.
Demir and Guzelis (2016) proposed a method for object recognition using two variations of
CNN: the first included ten layers, and the second, similar to AlexNet, consisted of nine
layers. For feature extraction, the entire image is divided into nine patches and features
are extracted from each patch using the CNN variants above together with BOW, where the BOW
uses SURF as the feature detector and HOG as the feature descriptor. Finally, the features
are supplied to an SVM for classification. Existing methods do not consider the presence of
artifacts such as noise and haze, which degrade classification performance.
3 PROPOSED FRAMEWORK
In this section, we briefly summarize the essential steps for multiclass image category clas-
sification in the presence of noise and haze using a deep learning technique.
the convolution layer. The pooling in a CNN can be either max pooling or average pooling.
For most of the image classification problems, max pooling is found to be more effective than
average pooling. Hence in this paper, we adopt max pooling.
The AlexNet used in this paper consists of one input layer, five convolution layers, three
pooling layers, three fully connected layers, along with two normalization layers, seven Rec-
tified Linear Unit (ReLU) layers, two dropout layers and a softmax and output layer. The
intermediate layers: convolution, pooling, fully connected and ReLU layers form the bulk of
the CNN. The convolution layer performs the convolution operation, which is the same as
convolution in image processing: in effect, it remembers the patterns learned during training
and, when a new pattern occurs, tries to label it with the closest learned pattern. The
pooling layer is used to reduce the feature size; it downsamples the feature map by dividing
it into rectangular pooling regions and selecting the maximum value from each region. When
using a CNN for image classification, the choice of activation function is very important:
usually the hyperbolic tangent (tanh) or ReLU is used, and in our model we use ReLU. In this
framework, we use a max pooling layer of size 3 × 3 with
stride 2. The first convolution layer uses 96 kernels with size 11 × 11@3, the second uses 256
kernels with size 5 × 5@48, the third and fourth uses 384 kernels with size 3 × 3@256 and the
final one uses 256 kernels with size 3 × 3@192. The first two convolution layers are each
followed by a normalization, ReLU and max pooling layer; the next two are followed only by a
ReLU layer; and the final convolution layer is followed by one ReLU and one pooling layer.
After the sixth and seventh ReLU layers we employ two dropout layers of 50% to avoid
overfitting. A sketch of this layer stack is given below.
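A sketch of the layer stack in PyTorch; the 227 × 227 RGB input, stride choices and the
number of output classes are standard AlexNet settings assumed here, and the two
local-normalization layers are omitted for brevity:

import torch.nn as nn

alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),    # conv1: 96 @ 11x11
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),  # conv2: 256 @ 5x5
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 256),  # e.g. one output per Caltech 256 class
)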
3.4 Classification
We can use different classifiers such as k-Nearest Neighbor (k-NN) (Yin & Bo, 2009), Naive
Bayes (Shih-Chung et al., 2015), Hidden Markov Model (HMM) (Jia et al., 2000) and Sup-
port Vector Machine (SVM) (Cortes & Vapnik, 1995).
4 CONCLUSION
In this paper, we have proposed an efficient method to classify a large number of object
categories in the presence of noise and haze. Here we use a variant of CNN, AlexNet, for
extracting flawless and compact features.
This approach requires no traditional feature extraction during the training stage, as the
CNN learns the features automatically, and classification can be performed very efficiently
even in the presence of noise and haze. The output of AlexNet is a set of compact and highly
relevant features; finally, a simple multiclass classifier such as an SVM can be used for
effective classification. Training and testing were conducted on two benchmark datasets,
Caltech 256 and Caltech 101, with noise and haze. We expect no considerable change in
performance even when artifacts like noise and haze are present.
REFERENCES
Alex, K., Ilya, S. & Geoffrey, E.H. (2012). ImageNet classification with deep convolutional neural net-
works. In Pereira, F., Burges, C., Bottou, L., Weinberger, K., (Eds.), Advances in Neural Information
Processing Systems 25 (pp. 1097–1105). Curran Associates, Inc.
Cireşan, D.C., Meier, U., Masci, J., Gambardella, L.M. & Schmidhuber, J. (2011). Flexible, high per-
formance convolutional neural networks for image classification. pp. 1237–1242.
Cortes, C. & V. Vapnik (1995, Sep). Support-vector networks. Machine Learning 20(3), 273–297.
Dalal, N. & Triggs, B. (2005). Histograms of oriented gradients for human detection. IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, 1, 886–893.
Demir, Y. & Guzelis, C. (2016). Moving towards in object recognition with deep learning for
autonomous driving applications. IEEE Conference Publications.
Foroughi, H., Ray, N. & Zhang, H. (2017). Object classification with joint projection and low-rank
dictionary learning. IEEE Transactions on Image Processing 99, 1–1.
Hailiang, L., Yongqian, H. & Zhijun, Z. (2017). An improved faster r-cnn for same object retrieval.
IEEE Access 5, 13665–13676.
Hieu Minh, B., L. Margaret, L., N. Eva, Cheng. Katrina, & B. Ian S. (2016). Object recognition using deep
convolutional features transformed by a recursive network structure. IEEE Access 4, 10059–10066.
Jia, D., Wei, D., Richard, S., Li-Jia, L., Kai, L. & Li, F.-F. (2009). Image Net: A large-scale hierarchical
image database. IEEE Conference on Computer Vision and Pattern Recognition, 248–255.
Jia, L., Najmi, A. & Gray, R.M. (2000). Image classification by a two-dimensional hidden
Markov model. IEEE Transactions on Signal Processing, 48, 517–533.
Jian, H., Huijun, G., Qi, X. & Naiming, Q. (2016). Feature combination and the knn framework in
object classification. IEEE Transactions on Neural Networks and Learning Systems, 27, 1368–1378.
LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.
Ling, S., Fan, Z. & Xuelong, L. (2015). Transfer learning for visual categorization: A survey. IEEE
Transactions on Neural Networks and Learning Systems, 26, 1019–1034.
Lowe, D.G. (1999). Object recognition from local scale-invariant features. Proceedings of the
Seventh IEEE International Conference on Computer Vision, 2, 1150–1157.
Ming, L. & Xiaolin, H. (2015). Recurrent convolutional neural network for object recognition. IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 3367–3375.
1 INTRODUCTION
Optical character recognition (OCR) is one of the oldest and most popular research areas.
Researchers are trying to improve the performance of existing methods by introducing new
methods for segmentation, feature extraction and classification. Handwritten character and/or
numeral recognition is the part of OCR where the characters are handwritten in nature.
Unlike printed numeral recognition, recognition of handwritten numerals is very difficult,
mainly due to the variation in the appearance of numerals, as different persons have
different writing styles.
Furthermore, handwritten character and/or numeral recognition can be either online or
offline. The online method involves recognizing characters in real time, i.e. the characters
are recognized as soon as they are written, usually from the coordinate values of the
strokes. An offline character recognition system, on the other hand, takes its input in the
form of an image, obtained by scanning, pre-processing and segmenting the document containing
the characters. The offline method is more challenging than the online one, as it involves
noise and other artifacts such as document quality and variation in the shape and style of
characters.
Recently, deep learning methods have shown promising performance in areas like object
recognition (Ayegl et al. 2016), object classification (Wanli et al. 2016), automatic number
plate recognition (Menotti, Chiachia, Falco, & Oliveira Neto 2014, Syed Zain, Guang,
Afshin, & Enrique 2017), sentiment analysis of texts (Abdalraouf & Ausif 2017), stock
market prediction (Vargas et al. 2017), automatic speech recognition (Palaz et al. 2015) and
character recognition (Mehrotra et al. 2013). From this we can conclude that
deep learning methods are nowadays used in almost all research areas, such as data analytics,
natural language processing (NLP), object and image classification, and character recognition.
A Convolutional Neural Network (CNN or ConvNet) is a type of deep learning neural network
architecture that is most suitable for representing image structure and analyzing visual
imagery. The main advantage of a CNN is that it requires a minimal amount of pre-processing
(Lecun et al. 1998).
In recent years, several variants of ConvNet have emerged from the ImageNet challenge,
including AlexNet (Alex et al. 2012), ZF Net (Zeiler, Fergus, Pajdla, Schiele, & Tuytelaars
2014), GoogLeNet (Christian, Wei, Yangqing, Pierre, Scott, Dragomir, Dumitru, Vincent, &
Andrew 2015) and ResNet (He et al. 2015). The main advantage of using a deep learning
structure like a ConvNet is that the need for manual feature extraction is eliminated, and
deep neural networks can outperform conventional methods (P Nair, Ajay, & Chandran 2017).
Malayalam and Kannada are two of the most commonly used languages in South India and are the
official languages of Kerala and Karnataka states respectively. The two languages belong to
the Dravidian language family and have their own numerals, which are more complex than modern
Hindu-Arabic numerals due to their curved nature. Figure 1 shows the Malayalam and Kannada
numerals.
To the best of our knowledge, this is the first paper to address Malayalam and Kannada
handwritten numeral recognition using a Convolutional Neural Network. We attempt to use a CNN
to achieve better performance in recognizing Malayalam and Kannada handwritten numerals.
2 LITERATURE SURVEY
Akm and Abdul (2017) used a CNN for handwritten Arabic numeral recognition. They conducted
experiments with two models. In the first, they used a variant of multi-layer perceptron
(MLP) with notable changes, employing a dropout mechanism to avoid over-fitting. The second
is a ConvNet model with two convolution and two max pooling layers: the first convolution
layer includes 30 feature maps, each with a 5 × 5 kernel, and the second includes 15 feature
maps, each with a 3 × 3 kernel, while both max pooling layers have a size of 2 × 2. To avoid
over-fitting, a dropout of 25% was used. The accuracy of the first model was 93.8% and that
of the second (CNN) model was 97.4%.
Akhand et al. (2015) used a CNN-based architecture for recognizing Bangla handwritten
numerals. The CNN includes two convolution layers with 5 × 5 kernels and two subsampling
layers of size 2 × 2 which use average pooling. The first-level feature maps (six maps)
obtained after the first convolution have a size of 24 × 24; the first subsampling layer
downsamples them to 12 × 12, the second convolution produces twelve 8 × 8 feature maps, and
the second subsampling layer further downsamples these to 4 × 4. No dropout mechanism was
used in this method. The overall accuracy of the system was 97.93%; a sketch of this layer
stack is given below.
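A PyTorch sketch of this layer stack, assuming a 28 × 28 input (so the first convolution
yields the stated 24 × 24 maps) and a ten-class output; the tanh activations are our
assumption:

import torch.nn as nn

bangla_cnn = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # 28x28 -> six 24x24 maps
    nn.AvgPool2d(2),                             # -> 12x12
    nn.Conv2d(6, 12, kernel_size=5), nn.Tanh(),  # -> twelve 8x8 maps
    nn.AvgPool2d(2),                             # -> 4x4
    nn.Flatten(),
    nn.Linear(12 * 4 * 4, 10),                   # ten numeral classes
)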
Ramadhan et al. (2016) used a similar CNN structure for handwritten mathematical symbol
recognition, with three feature maps in the first convolution/subsampling stage and five in
the second, and with max pooling instead of the average pooling used above. The overall
accuracies for training and testing were 93.27% and 87.72% respectively.
3 PROPOSED SYSTEM
A Convolutional Neural Network needs to be trained with a large image dataset to obtain good
performance; this is the main challenge in any deep learning project. However, there are
techniques to increase the number of images in a dataset. No large open-source dataset is
available for Malayalam and Kannada numerals. Figure 2 shows the overall architecture of the
proposed system.
The proposed system includes the following steps:
3.2 Pre-processing
The pre-processing step usually involves resizing, normalization and removing unwanted
entities such as noise and haze from the numeral images; it helps to ease numeral
recognition. First, the images are resized to an appropriate size and converted to grayscale.
After that, the pixel values are inverted to obtain the negative of the image, as in the
sketch below.
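A minimal sketch of this chain using Pillow; the 32 × 32 target size and the [0, 1]
normalization are assumptions for illustration:

import numpy as np
from PIL import Image, ImageOps

def preprocess(path, size=(32, 32)):
    img = Image.open(path).convert('L')  # convert to grayscale
    img = img.resize(size)               # resize to a standard size
    img = ImageOps.invert(img)           # negative: light strokes on dark
    return np.asarray(img, dtype=np.float32) / 255.0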
This is usually done to reduce the storage size. The image size has a great influence on the
training stage: if it is too large, training time increases because more computation is
involved; if it is too small, fitting the images into the network becomes difficult. Hence it
is always good to choose an appropriate size, and methods like padding can be used to reach a
standard size.
3.6 Classification
This is the final stage, in which a softmax function is used to classify the output of the
CNN. The softmax outputs fall within the range [0, 1] and sum to 1 over all classes; the
input numeral is assigned to the class with the highest output value, as in the sketch below.
The system can be evaluated by varying the layers in the CNN and by varying parameters such
as the learning rate. Backpropagation with gradient descent is the most effective learning
rule for image classification problems.
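A numerically stabilized sketch of the softmax function described above:

import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()           # outputs in [0, 1], summing to 1

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum(), probs.argmax())  # probabilities, 1.0, predicted class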
4 CONCLUSION
Handwritten numeral recognition has a large number of applications, such as ZIP code
recognition (LeCun et al. 1989) and recognizing numerals in old documents. Traditional
methods use handcrafted features, which require a great deal of effort and time; this can be
eliminated by automatic feature learning methods such as Convolutional Neural Networks, Deep
Belief Networks and autoencoders. These deep learning methods have shown outstanding
performance in recognizing numerals as well as handwritten characters (Shailesh et al. 2015).
Here we have proposed a bilingual handwritten numeral recognition system for Malayalam
and Kannada numerals. Dataset creation and CNN modeling are very time consuming. We will
provide the first large open-source dataset for Malayalam and Kannada numerals. To reduce the
time required for the training stage, graphics processing unit (GPU) support is used, and to
avoid overfitting during the training phase of the CNN, a dropout mechanism is applied. CNNs
have shown better results in recognizing numerals in various scripts, so there is a high
probability that they will do the same for Malayalam and Kannada numerals.
REFERENCES
Abdalraouf, H. & M. Ausif (2017). Deep learning approach for sentiment analysis of short texts. 3rd
International Conference on Control, Automation and Robotics (ICCAR), 705–710.
Akhand, M.A.H., M. Mahbubar Rahman, P.C. Shill, I. Shahidul, & M.M. Hafizur Rahman (2015).
Bangla handwritten numeral recognition using convolutional neural network. International Confer-
ence on Electrical Engineering and Information & Communication Technology (ICEEiCT2015), 1–5.
Akm, A. & K.T. Abdul (2017). Handwritten arabic numeral recognition using deep learning neural
networks. CoRR abs/1702.04663.
Alex, K., S. Ilya, & E.H. Geoffrey (2012). Imagenet classification with deep convolutional
neural networks. In Pereira, F., Burges, C., Bottou, L. & Weinberger, K. (Eds.), Advances in
Neural Information Processing Systems 25 (pp. 1097–1105). Curran Associates, Inc.
Ayegl, U., D. Yakup, & G. Cneyt (2016). Moving towards in object recognition with deep learning for
autonomous driving applications. International Symposium on Innovations in Intelligent SysTems and
Applications (INISTA), 1–5.
Christian, S., L. Wei, J. Yangqing, S. Pierre, R. Scott, A. Dragomir, E. Dumitru, V. Vincent, &
R. Andrew (2015). Going deeper with convolutions. IEEE Conference on Computer Vision and Pat-
tern Recognition (CVPR).
He, K., X. Zhang, S. Ren, & J. Sun (2015). Deep residual learning for image recognition. CoRR 7.
LeCun, Y., B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, & L.D. Jackel (1989).
Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551.
Lecun, Y., L. Bottou, Y. Bengio, & P. Haffner (1998). Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86, 2278–2324.
Md, S., M. Nabeel, & A. Md Anowarul (2016). Bangla handwritten digit recognition using autoen-
coder and deep convolutional neural network. International Workshop on Computational Intelligence
(IWCI), 64–68.
ABSTRACT: The hybrid control model of SDN is designed to improve network productivity by
reducing the controller load. When the network is under heavy load, flow rules are installed
on a network device by other network devices on behalf of the controller; under normal load,
control is centralized. Thus the controller does not have to program flows into each piece of
network equipment one by one; instead it can ask the equipment to spread a flow to other
equipment on its behalf. This model is not secure against malicious applications, since all
applications are treated in the same way and there is no way to distinguish between a genuine
and a malicious application. This paper proposes a permission system to which applications
must subscribe on initialization with the controller; before an application's commands are
approved, a permission check is performed. The priority of the application is also considered
when granting the permission, in order to deal with policy conflicts. This will effectively
monitor the working of every application and thus prevent any unauthorized operations.
1 INTRODUCTION
The main feature of the SDN architecture is the separation of the control plane from the data
plane. The data plane devices forward packets on the basis of instructions obtained from a
logically centralized controller which also maintains the network state. In traditional networks
a change in the device configuration or the routing strategy would mean the modification of
the firmware of all the involved data plane devices, which would incur high cost. Since SDN
implements the control plane in software, the changes in the routing strategy can be made
from a single point, i.e. there is centralization of policies. The forwarding devices no longer need to make decisions and are thus less complex, allowing the creation of low-cost devices.
OpenFlow is the most widely used protocol for communication between the control and data planes. In an attempt to balance the controller load efficiently when using OpenFlow, Othman & Okamura (2013a) proposed a hybrid control model of SDN that allows the regular centralized control model to be used in normal situations, but at the same time introduces a distributed control model to ensure proper working of the network in situations where the controller is under substantial load and is required to install a large number of flows into the forwarding devices. In such cases, the hybrid control model relieves the controller of any further processing to relocate the flows, enabling it to install those flows as they are and relying on the distributed control of the network equipment to solve any issues of equipment overloading. Thus, the hybrid control model enables the smooth working of the network even in cases of overloading.
The SDN architecture brings both advantages and disadvantages to the security of the network. On one hand, once an attack is detected, the global view of the network allows countermeasures to be taken much more quickly; on the other hand, it brings additional security challenges. One of these issues is attack by malicious applications. The vulnerability created in the network by granting complete control and visibility of the network to applications was discussed in Wen & Wang (2013). The interaction of the applications with the network
2 PROBLEM DESCRIPTION
In the hybrid model of SDN, the applications interact with the network via the controller
which provides an abstraction of the data plane elements. The applications can get informa-
tion such as the current state of the network or can change the routing strategy by asking the
hybrid controller to write the flows to various devices. The controller will either spread the flow by itself or, in case of heavy load, ask other equipment to spread the flows. Every application is treated in the same way, and there is no distinction between a genuine application and a malicious one.
Although the effect of some attacks such as DoS will be less severe in this model compared to normal SDN, an attacker can control the entire network by writing the intended flow rules to the network devices via a malicious application. Also, there is no control over the type of operations that genuine applications can perform.
In addition to securing the hybrid model of SDN by preventing malicious applications
from taking control over the network, this work also enforces priority to different applica-
tions to ensure that their operations do not interfere with each other.
3 RELATED WORK
Various security issues and corresponding solutions were discussed in Sandra Scott-Hayward & Sakir Sezer (2016). The problem of treating every application with the same privilege was first identified in Wen & Wang (2013). The authors propose PermOF, a set of permissions enforced at the Application Programming Interface (API) entry using an isolation mechanism; it was successful in solving the app privilege problem and thus securing the network. The concept of the permission system is extended in S. Scott-Hayward & Sezer (2014) and implemented on top of the Floodlight controller. The authors define the set of permissions to which an application must subscribe on initialization with the controller and introduce an Operation Checkpoint, in which a permission check is performed before approving the application's commands. They also used an unauthorized-operations log to examine malicious activity in order to build a profile for SDN application-layer attacks. Although this work discussed the problem of application priority enforcement, a solution for that specific issue was not presented.
The issue of policy conflict was discussed in P. Porras & G. Gu (2012). The system uses the FortNOX enforcement engine, which handles possible conflicts by using the author's security authorization to decide on flow insertion. It checks whether a new flow clashes with an existing flow rule; if it conflicts, the new rule is installed only if it is issued by a higher-priority author. The need to resolve the proper authorization level of the flow-rule author is a drawback of this method.
Stanford Research Institute extended the Floodlight controller to develop Security-Enhanced Floodlight with a security enforcement kernel (SEK). An administrator authorizes an application's Java class, which is digitally verified by the SEK at run time. The application has full control over the network once it is signed and approved.
4 DESIGN
A complete set of permissions is defined, to which the applications must subscribe on initialization with the controller. The permission set is similar to that in Wen & Wang (2013) and includes all types of permissions an application may need to execute. The permissions are stored securely, with application IDs linked to the group of permissions allowed to an application, similar to S. Scott-Hayward & Sezer (2014); the network administrator can add or remove permissions of an application via the user interface of the security application.
The applications are also given priorities, to ensure that the flow-rule installations of different applications do not interfere with one another. The priorities are stored together with the permissions and are taken into account while allowing applications to perform various operations. A sample of the permission system is given in Table 1.
Before granting a subscribed permission to an app, the security application checks whether it interferes with the operation of another application; the permission is granted only if the operation of a higher-priority application is not affected by it. Applications may ask the security app for a time slot during which their operation is protected, which is stored by the security app along with the permission-set data. If a new request for a permission arrives before the expiration of the time slot, the permission is granted only if it was requested by a higher-priority application. The working of the security application is given in Figure 1.
The security application also allows other applications to request various permissions and to query the permissions currently granted to them.
Table 1. A sample of the permission system.

App   Permissions            Priority
A     read_topology          X
      read_all_flow
      flow_mod_route
      flow_mod_drop
      flow_mod_modify_hdr
      modify_all_flows
B     pkt_in_event           Y
      flow_removed_event
      error_event
      topology_event
C     flow_mod_modify_hdr    Z
      modify_all_flows
      send_pkt_out
      set_device_config
      set_flow_priority
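The check described above (permission subscription, priorities and time slots) can be illustrated with a minimal Python sketch; the application IDs, permission names and the numeric priority encoding (lower value = higher priority) are illustrative assumptions, not the paper's implementation.

import time

# Permission store: app_id -> (granted permission set, priority; lower = higher)
PERMISSIONS = {
    "A": ({"read_topology", "flow_mod_route"}, 1),
    "B": ({"pkt_in_event", "topology_event"}, 2),
}
# Active reservations: permission -> (holder app_id, expiry timestamp)
RESERVATIONS = {}

def request(app_id, permission, slot_seconds=0):
    perms, priority = PERMISSIONS.get(app_id, (set(), None))
    if permission not in perms:
        return False                  # app never subscribed to this permission
    held = RESERVATIONS.get(permission)
    if held is not None:
        holder, expiry = held
        if time.time() < expiry and PERMISSIONS[holder][1] <= priority:
            return False              # a live slot is held by an equal/higher-priority app
    if slot_seconds:
        RESERVATIONS[permission] = (app_id, time.time() + slot_seconds)
    return True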
5 CONCLUSION
The SDN architecture encourages the deployment of third party applications, but it also
introduces the additional task of ensuring which of these are genuine. This paper proposes a
security framework to safeguard the hybrid model of SDN by preventing malicious applica-
tions from taking control of the network. The hybrid model is an interesting attempt to balance the controller load and, once the various security issues are solved, it can be used successfully in practice.
REFERENCES
Othman, O.M. & K. Okamura (2013a). Hybrid control model for flow-based networks. in the inter-
national conference COMPSAC 2013 – The First IEEE International Workshop on Future Internet
Technologies, Kyoto, Japan.
Othman, O.M. & K. Okamura (2013b). Securing distributed control of software defined networks. Int. J. Comput. Sci. Netw. Security 13, 5–14.
Porras, P., S. Shin, V.Y.M.F.M.T. & G. Gu (2012). A security enforcement kernel for OpenFlow networks. In Proceedings of the 1st Workshop on Hot Topics in SDN. ACM, 121–126.
Scott-Hayward, S. & S. Sezer (2016). A survey of security in software defined networks. IEEE Communications Surveys & Tutorials 18, 623–654.
Scott-Hayward, S., C.K. & S. Sezer (2014). Operation Checkpoint: SDN application control. 22nd IEEE International Conference on Network Protocols (ICNP). IEEE, 618–623.
Wen, X., Y. Chen, C.H.C.S. & Y. Wang. (2013). Towards a secure controller platform for open flow
applications. in Proceedings of the second ACM SIGCOMM workshop on Hot topics in software
defined networking. ACM, 171–172.
ABSTRACT: CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs are meant to distinguish between humans and software bots and are used to prevent unauthorized access to Internet resources by bots. A CAPTCHA is a type of challenge-response test, in which a computer program generates a test that most humans can pass but computer programs cannot. In this paper, a
new CAPTCHA approach is proposed: Human-Intervened CAPTCHA (HI-CAPTCHA).
This HI-CAPTCHA strengthens security in Wireless LAN (WLAN)-based mobile applica-
tions and systems. For example, in WLAN-based classroom applications, it is used for the
identification of the bots as well as for that of genuine users. A genuine user is one who is
authorized to use the WLAN system and is responding from inside the classroom.
1 INTRODUCTION
CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is a mechanism used to distinguish between human users and computer programs (von Ahn et al., 2004). The term CAPTCHA was coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper (all of Carnegie Mellon University), and John Langford (then of IBM) (von Ahn et al., 2003). It is encountered in one form or another while using
different services, such as Gmail and PayPal, or online banking accounts (Baird & Popat,
2002; Datta et al., 2009). CAPTCHA is basically a challenge-response test that most humans
can pass but computer programs cannot. The rule of thumb of CAPTCHA is that it should
be solved easily by humans but not by software bots. CAPTCHA uses a reverse Turing test
mechanism (Coates et al., 2001). It has the following specifications:
• The judge is a machine rather than a human;
• The goal is that all human users will be recognized and can pass the test whereas no com-
puter program will be able to pass the test.
Thus, CAPTCHA helps in preventing automated and Artificial Intelligence (AI) software
programs known as bots from conducting unlawful activities on web pages, or in stopping
spam attacks on mail accounts. The bots try to automatically register for a large number of
free accounts and use these accounts to send junk email messages or to slow down services
by repeatedly signing in to accounts and causing denial of service. CAPTCHA stops such
autonomous entries and activities by bots in websites or in password-protected accounts.
Therefore, many websites utilize CAPTCHA against web bots. Some of the applications
(Carnegie Mellon University, 2010) of the CAPTCHA are as follows:
• Preventing comment spam in blogs
• Protecting website registration
2 RELATED WORK
Several Internet-based CAPTCHA mechanisms that use text, graphics, image, or sound exist
and are used by researchers to prevent bots and enhance security. These may be categorized broadly into two kinds: plain CAPTCHAs, used only to detect and distinguish humans from bots; and CAPTCHAs that meet the above purpose while also utilizing the human effort spent in solving them. This paper aims to utilize the human effort spent in solving the CAPTCHA for authentication purposes.
reCAPTCHA (Figure 1) was developed by Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum at Carnegie Mellon University (CMU), and was acquired by Google in September 2009 (von Ahn et al., 2009). They studied harnessing the time and energy that humans spend solving CAPTCHAs. Every time a CAPTCHA is solved, the effort is utilized in digitizing books, annotating images, and building machine-learning datasets. It thus performs multiple tasks: it identifies bots, helps solve AI problems, and supports machine learning. For example, it helps utilize human effort for digitizing books in the following manner. The user is provided with two texts: the first is recognized by OCR while the other is not. The user has to enter both texts and thus helps the machine-learning process by entering the unidentified word into the database.
The proposed HI-CAPTCHA system is likewise designed to utilize human effort: to authenticate users in the WLAN environment and to differentiate between a bot and a human. It is also designed to differentiate between genuine and invalid users; the latter category of users is not permitted to work with the system.
4 HI-CAPTCHA RESULTS
The proposed HI-CAPTCHA system is developed on the Android platform. The main software used for the development of the proposed system are Pivotal tc Server, Spring Tool Suite and Android Studio 2.0. It is assumed that a user outside the room/class is not able to see or perceive things inside the room/class. In HI-CAPTCHA, the administrator (teacher) selects one question from a list of questions. The questions are designed in such a way that the answers vary according to the administrator's requirement; for example, 'What is the color of the administrator's shirt?' The answer changes according to the color of the administrator's shirt, and the administrator can wear a shirt of any color. Hence, the answer is not fixed, and bots and outsiders are not able to answer the questions. Thus, the system remains secure against bot attacks and users positioned outside the room.
The proposed HI-CAPTCHA system is designed to differentiate between the genuine user and
the invalid user. For example, in mobile-based examination systems all users outside the class are
invalid users. A genuine user is one who is inside the classroom and authorized to use the system.
In the prototype of the system implemented (Figure 3a), the administrator has three
options to select: automated CAPTCHA, HI-CAPTCHA or an administrator task. If only
bots are to be restricted, plain automated CAPTCHA is selected. If invalid users are to be
restricted, HI-CAPTCHA is selected. The ‘administrator task’ option is used for chang-
ing passwords, registering users, and so on. Once HI-CAPTCHA is selected, further screens (Figures 3b and 3c) are displayed that ask the administrator to set up the questions and their answers for the current session. The duration within which a client's response is
required is also selected (Figure 3d) depending upon network characteristics such as delay or
bandwidth. After finalizing the questions, answers and duration, the administrator sends the
question to the client to answer (Figure 3e). The client view of the HI-CAPTCHA is shown
in Figure 3f. It is used by the client to answer the administrator’s question correctly.
During the system run, all users who were in the class were able to see the administrator and hence were able to answer the questions. Users outside the classroom were not able to see the administrator, so they could not answer the questions and were therefore identified as invalid users.
The proposed HI-CAPTCHA system works very well and detects bots and invalid users.
During the test, users outside the classroom were identified as invalid users. In Figure 3f, the
question selected by the administrator is: ‘In which direction is the administrator standing?’
The answer to the question depends on the direction of the administrator. All the valid users
in the class were able to see the administrator so they were able to answer correctly, while the
invalid user was not able to answer correctly.
*The HI-CAPTCHA system permits the administrator to design new and different questions as per
needs.
**The direction coordinates for this question may be decided by the instructor in the class and the stu-
dents briefed in advance.
HI-CAPTCHA is used for the identification of genuine users. A genuine user in the system is
one who is authorized to use the system. For example, in the mobile-based online examination sys-
tem, all the users inside the class are genuine users, while the users outside the class are not genuine.
The following test case was conducted to demonstrate the result of the HI-CAPTCHA:
• The administrator selects the HI-CAPTCHA to authenticate the genuine user.
• The question selected by the administrator is 'Is the administrator moving or standing?'
• The answer to the question is based on the current time, situation or location of the administrator; here, the administrator is standing.
• The time selected by the administrator is 1 minute. This is the maximum time the user has to answer the question.
All users inside the class answered correctly, as they could see the administrator, but a user outside the classroom was not able to answer correctly because he could not see the movement of the administrator. Hence, the administrator was able to identify the genuine users. Figure 3 shows a diagrammatic view of the HI-CAPTCHA test case for identifying a genuine user.
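This test-case flow can be summarized in a small sketch; the question, answer and one-minute window follow the test case above, while the class and method names are hypothetical simplifications of the Android/Spring implementation.

import time

class Challenge:
    def __init__(self, question, answer, duration_s=60):
        self.question = question
        self.answer = answer.strip().lower()
        self.issued_at = time.time()
        self.duration_s = duration_s   # maximum time the user has to answer

    def check(self, response):
        in_time = (time.time() - self.issued_at) <= self.duration_s
        correct = response.strip().lower() == self.answer
        return in_time and correct     # genuine user: correct and within time

challenge = Challenge("Is the administrator moving or standing?", "standing")
print(challenge.check("Standing"))     # True for a user who can see the room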
All the users inside the class were genuine users as they were authorized to use the pro-
posed system. However, a case may arise in which a user inside the class is not connected to
the server. In such a case, the user is not able to utilize the services of the system and will be
considered as an unauthorized user. There are a few situations in which this may occur:
5 CONCLUSION
CAPTCHA is a very effective way of stopping bots and reducing spam, and it keeps web data secure from intruders. Almost every website contains a CAPTCHA in one form or another; most sign-in, sign-up and form submissions over the Internet include one. As AI evolves, the need to develop new and advanced forms of CAPTCHA has arrived. Google's reCAPTCHA is an example of an advanced CAPTCHA: it not only detects software bots but also helps in the machine-learning process. In the same way, the proposed CAPTCHA system is designed for a multi-tasking environment. The proposed HI-CAPTCHA not only differentiates between a bot and a human but also differentiates between genuine and invalid users. The proposed HI-CAPTCHA system can be adapted for WLAN-based mobile attendance systems, e-polling systems, or for generating user details, and it works very well in a local server environment. In future, it may also be tested alongside new CAPTCHA variants such as No CAPTCHA reCAPTCHA (Google, 2017).
REFERENCES
Azad, S. & Jain, K. (2013). CAPTCHA: Attacks and weaknesses against OCR technology. Global Jour-
nal of Computer Science and Technology, 13(3).
Baird, H.S. & Popat, K. (2002). Human interactive proofs and document image analysis. In D. Lopresti,
J. Hu & R. Kashi (Eds.), Document analysis systems V. DAS 2002. Lecture notes in computer science
(Vol. 2423, pp. 507–518). Berlin, Germany: Springer.
Carnegie Mellon University. (2010). CAPTCHA: Telling humans and computers apart automatically.
Retrieved from http://www.captcha.net.
Coates, A.L., Baird, H.S. & Faternan, R.J. (2001). Pessimal print: A reverse Turing test. In Proceedings
of 6th International Conference on Document Analysis and Recognition, Seattle, WA (pp. 1154–1158).
New York, NY: IEEE.
Datta, R., Jia, L. & Wang, J.Z. (2009). Exploiting the human-machine gap in image recognition for
designing CAPTCHAs. IEEE Transactions on Information Forensics and Security, 4(3), 504–518.
Google. (2017). Introducing the new reCaptcha! Retrieved from https://www.google.com/recaptcha/
intro/index.html.
von Ahn, L., Blum, M., Hopper, N.J. & Langford, J. (2003). CAPTCHA: Using hard AI problems for
security. In E. Biham (Ed.), Advances in cryptology—EUROCRYPT 2003. Lecture notes in computer
science (Vol. 2656, pp. 294–311). Berlin, Germany: Springer.
von Ahn, L., Blum, M. & Langford, J. (2004). Telling humans and computers apart automatically. Com-
munications of the ACM, 47(2), 57–60.
von Ahn, L., Maurer, B., McMillen, C., Abraham, D. & Blum, M. (2009). reCAPTCHA: Human-based
character recognition via web security measures. Science, 321(5895), 1465–1468.
ABSTRACT: This paper introduces a Semantic Role Labeling (SRL) method for Semantic Identification (SI) and Conceptual Graph (CG) techniques for semantic representation. In Semantic Role Labeling, the semantic roles, such as agent and patient, are identified using Karaka theory. The performance of SRL is evaluated using Yet Another Chunk Annotator, a Support Vector Machine based algorithm, which has shown significant improvement over earlier methods. The semantic representation introduced here is a directed graph showing the relation between concepts and semantic roles.
1 INTRODUCTION
Semantics is the study of the meaning of linguistic utterances. It refers to sentence-level meaning, which is context independent and purely linguistic. The semantic approaches introduced here have manifold applications in various NLP areas such as question answering, machine translation and text summarization. Semantic identification and representation is a very important issue in natural language processing.
A sentence is represented by predicates and their corresponding arguments. A predicate represents an event, and semantic roles are the abstract roles that the arguments of a predicate can take in that event. For meaning identification and representation, we must identify the predicates and the abstract roles in a sentence. For high-level understanding, many question types need to be dealt with, such as who did what to whom. Semantic Role Labeling, a shallow semantic parsing approach, tries to find answers to these questions (Jurafsky & Martin 2002). Thus it takes the preliminary steps in extracting meaning from a sentence by giving generic labels or roles to the tokens of the text (Gildea & Jurafsky 2002).
Semantic role labeling is implemented using an SVM classifier in (Kadri et al. 2003). The results, evaluated using both hand-corrected TreeBank syntactic parses and actual parses from the Charniak parser, show precision and recall rates of 75.8% and 71.4% for this work. Semantic role labeling based on syntactic chunks, presented in (Kadri & Wayne 2003), shows encouraging results with precision and recall rates of 76.8% and 73.2%, respectively.
Semantic role labeling methods for Malayalam are presented in (Dhanya, P.M 2010, Jisha & Satheesh 2016), and the roles identified are used for general concept understanding and concept representation (Radhika & Reghuraj 2009). Plagiarism detection in Malayalam documents based on extracting semantic roles and computing their similarity (Sindhu & Suman Mary 2015) is another work related to semantic role labeling.
The conceptual graph representation proposed by Sowa (Sowa 2008) expresses the meaning of a sentence in a logically precise, humanly readable and computationally tractable form. In CG, we have to consider the concepts and their corresponding semantic relations. This technique has been applied to many real-life objects, including text (Paola et al. 2008). CGs are used in relation extraction, information extraction and many other concept extraction techniques (Montes et al. 2001). Fact extraction is part of the more general problem of knowledge extraction from text (Yi Wan et al. 2014). The fact extraction from natural
2 METHODOLOGY
The overall system architecture is given in Figure 1. For syntactic-level processing, the main steps are tokenization, POS tagging and morphological analysis. On the semantic side, the main steps are semantic role labeling and intermediate representation.
2.1 Tokenization
Tokenization is the task of chopping a character sequence into pieces, called tokens, perhaps at the same time throwing away certain characters, such as punctuation. For tokenization we developed an algorithm, and for compound-word splitting an SVM machine learning technique is used; it classifies the tokens into two groups, compound words and simple words (see the sketch below).
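As an illustration of the tokenization step only, assuming whitespace- and punctuation-delimited input (the SVM-based compound-word splitter is not reproduced here):

import re

def tokenize(text):
    # \w covers Unicode word characters, so Malayalam script is preserved
    return re.findall(r"\w+", text)

print(tokenize("Hello, world! 123"))   # ['Hello', 'world', '123']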
6. Destroy verbs
7. Vehicle-motion verbs
8. Weather verbs

Noun class

1. Animals
2. Audible only
3. Birds
4. Buildings
5. Collective humans
6. Concepts
7. Positions
8. Person names
9. Non-living
Next, we discuss some examples of how semantic roles can be identified with the help of karaka, vibhakthi and semantic properties; a rule sketch follows the list.
1. Nirdeshika: nirdeshika + verb (emotional verb): the semantic role is experiencer. Here the sentence has subject + object + verb form, and if the verb is a causative verb then the noun with nirdeshika vibhakthi takes the role of agent; a noun showing prathigrahika vibhakthi takes the role of patient.
2. Udeshika: udeshika + verb: if the verb is a beneficiary verb, then the role is beneficiary or recipient.
3. Nirdeshika + nirdeshika + verb: here, if the first noun has the semantic property person name and the second is a fruit name, the person takes the role of agent and the other takes patient.
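The following sketch encodes the three example patterns above as a small lookup table; the rule names and coverage are illustrative only, not the full rule set of the system.

RULES = [
    # (vibhakthi of noun, verb class)            -> semantic role
    (("nirdeshika",    "emotional"),   "EXPERIENCER"),
    (("nirdeshika",    "causative"),   "AGENT"),
    (("prathigrahika", "causative"),   "PATIENT"),
    (("udeshika",      "beneficiary"), "BENEFICIARY"),
]

def assign_role(vibhakthi, verb_class):
    for (vib, vcls), role in RULES:
        if vib == vibhakthi and vcls == verb_class:
            return role
    return None   # fall back to other cues (e.g. noun semantic property)

print(assign_role("nirdeshika", "causative"))   # AGENT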
POS category     Semantic roles
Noun             AGENT1, AGENT2, PATIENT1, PATIENT2
Adjectives       NIL
Adverbs          NIL
Postpositions    NIL
Verbs            MAIN VERB, SUB VERB
3 RESULTS
In this experiment the sentence word length is minimum two and maximum five. We have
taken the complex sentence with maximum two verbs. In our study the evaluation is based on
the standard metrics such precision and recall.
The precision for a class is the number of true positives divided by the total number of elements labelled as belonging to the positive class (i.e. the sum of true positives and false positives, where false positives are items incorrectly labelled as belonging to the class). Recall in this context is the number of true positives divided by the total number of elements that actually belong to the positive class (i.e. the sum of true positives and false negatives). The precision and recall rates corresponding to each role are given in Table 3.
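In symbols, with TP, FP and FN denoting the true positives, false positives and false negatives for a role,

precision = TP / (TP + FP),   recall = TP / (TP + FN).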
The result of semantic role labeling and the semantic representation in the form of a graph for a sample input sentence are given below.
Input sentence: (Malayalam sentence)
Semantic roles: AGENT1, APPEARANCE, PATIENT1, MAIN VERB
Graph: directed graph linking the identified roles (figure)
4 CONCLUSION

The SVM-based classifier introduced here treats Semantic Role Labeling (SRL) as a semantic grouping problem over words: the SRL identifies the semantic roles associated with each word in a sentence. The Conceptual Graph presented here shows the relation between predicates and arguments in a sentence. The CG representation depends on the semantic role labeling: if the roles are identified correctly, the system becomes more accurate. By introducing more semantic roles and by increasing the sentence length, the present study can be improved.
REFERENCES
Anoop, V.S. & Ashraf. 2017. Extracting Conceptual Relationships and Inducing Concept Lattices from
Unstructured Text. Journals of intelligent systems.
Archana, S.M. & Vahad, Naima. & Rekha, Thankappan. & Raseek. C. 2015. A Rule Based Question
Answering System in Malayalam corpus Using Vibhakthi and POS Tag Analysis. International Con-
ference on Emerging Trends in Engineering, Science and Technology (ICETEST – 2015).
Bharati, Akshar. & Vineeth, Chaithanya. & Sangal, Rajeev. Natural Language Processing: A Paninian
Perspective. Prentice-Hall of India, New Delhi.
Bogatyrev, Mikhail. 2017. Fact extraction from natural language with conceptual modelling, Springer
international Publishing AG.
Dhanya, P.M. 2010. Semantic Role labeling Methods: A Comparative Study. In: proceedings of National
Conference on Human Computer Interaction and Image Processing. Vidya Academy of Science &
Technology.
Gildea, Daniel. & Jurafsky, Daniel. 2002. Automatic labeling of semantic roles. Computational Linguistics 28(3): 245–288.
Gildea, Daniel. & Hockenmaier, Julia. 2003. Identifying semantic roles using Combinatory Categorial Grammar. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, 57–64. Association for Computational Linguistics.
Kadri, Hacioglu. & Pradhan, Sameer. & Wayne, Ward. & Martin, James. & Jurafsky, Daniel. 2003. Shallow Semantic Parsing Using Support Vector Machines. CSLR Tech. Report.
Kadri, Hacioglu. & Wayne, Ward. 2003. Target word detection and semantic role chunking using sup-
port vector machines. In Proceedings of HLT-NAACL 2003.
Jisha, Jayan P. & Kumar, Satheesh. 2016. Semantic role labeling for Malayalam. IJCTA, pp. 4725–4731. International Science Press.
Jurafsky, Daniel. & Martin, James. 2002. An Introduction to Natural Language Processing: Computational Linguistics and Speech Recognition. Pearson Education.
Manu, Madhavan. & Reghu, Raj. P.C. 2012. Application of Karaka Relations in Natural Language
Generation. In Proc. of National Conference on Indian Language Computing, CUSAT, Kerala.
Montes, Y. Gomez. M. & Gelbukh, A. & Lopez-Lopez & Baeza-Yates. Text mining with conceptual
graphs. 2001. IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man
for Cybernetics in Cyberspace (Cat. No. 01CH37236) Year: 2001, Volume: 2.
Paola, Velardi. & Maria, Teresa. Pazienza. & Mario De’ Giovanetti. 1998. Conceptual graphs for the
analysis and generation of sentence. IBM journal of Research and Development.
Radhika, K.T. & Reghuraj, P.C. 2009. Semantic role extraction and General concept understanding in
Malayalam using Paninian Grammar. International Journal of Engineering Research and Develop-
ment, vol 9, issue 3.
Sindhu, L. & Suman Mary, Idicula. 2015. SRL based Plagiarism Detection system for Malayalam Doc-
uments. International Journal of Computer Sciences issues, vol 12, November 2015.
Sowa. John. 2008. Conceptual Graph. Chapter 5 of the Handbook of Knowledge Representation, ed.
by F. van Harmelen & V. Lifschitz, & Porter. B, Elsevier, 2008, pp. 213–237.
Stephen, Chu. & Branko, Cesnik. 2001. Knowledge representation and retrieval using conceptual graphs and free-text document self-organisation techniques. International Journal of Medical Informatics 62(2001): 121–133.
Yi, Wan. & Tingting, He & Xinhui, Tu. 2014. Conceptual graph based text classification. IEEE Interna-
tional Conference on Progress in Informatics and Computing 2014.
ABSTRACT: The web contains a very large amount of unstructured text. Converting unstructured text to structured text annotated with semantic information can provide useful summaries for both humans and machines. The semantic relation is one of the most important parts of such semantic information; hence, extracting the semantic relations that hold between entities in a text is important in many natural language understanding applications such as question answering, conversational agents and summarization. There are several proposed methods in this area: hand-built patterns, bootstrapping methods, supervised methods, unsupervised methods, and Distant Supervision (DS) are examples. The purpose of this work is to review the various methods used for Relation Extraction (RE). For each approach, the respective motivation is discussed and the merits and demerits are compared. A discussion of various datasets is also included in this paper.
1 INTRODUCTION
The inputs to an RE system are a POS-tagged corpus, C, a Knowledge Base (KB), K, and a predefined relation type set, R. An entity mention is a token span, denoted e, and r(e1, e2) is the binary relation between entities e1 and e2. The manual labeling of training corpora is expensive in terms of time. The method of Distant Supervision (DS) can overcome this problem, which is why DS has gained significant importance in RE tasks.
DS generates training data by automatic alignment of text with a KB; it can jointly extract entities together with their relations with minimal or no human supervision. Hence, the KB is also an input to the RE system. A minimal sketch of the DS labeling step is given after the problem definition below.
PROBLEM DEFINITION: Given a POS-tagged corpus, C, a KB, K, and a predefined relation type set, R, the RE task aims to 1) detect entities from C, 2) generate training data D with the KB, K, and 3) estimate a relation type r which belongs to R U {None}.
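The alignment step can be illustrated as follows; the KB triple and sentence are invented examples, and real systems add entity detection and feature extraction on top of this labeling.

KB = {("Barack Obama", "Honolulu"): "born_in"}   # K: relation triples

def ds_label(sentence, entity_pairs):
    """Generate (sentence, e1, e2, relation) training examples by KB alignment."""
    examples = []
    for e1, e2 in entity_pairs:
        relation = KB.get((e1, e2), "None")      # r in R U {None}
        examples.append((sentence, e1, e2, relation))
    return examples

sent = "Barack Obama was born in Honolulu."
print(ds_label(sent, [("Barack Obama", "Honolulu")]))
# [('Barack Obama was born in Honolulu.', 'Barack Obama', 'Honolulu', 'born_in')]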
Current RE tasks with DS face the following limitations when handling a joint extraction task:
• Domain Restriction: Most methods (Mintz et al., 2009; Takamatsu et al., 2012) rely heavily on pre-trained named entity recognizers, which are typically designed for general types such as person or organization; further manual work is needed to deal with domain-specific names.
• Error propagation: Errors can propagate from upper components to lower components, and dependencies among tasks are ignored in most existing methods.
• Domain-independent systems: A major challenge is to design a domain-independent sys-
tem. Most existing methods are domain dependent.
• Label Noise: Mapping relations in the text to KB relations may produce false labels in the training corpora; this causes uncertainty in DS and thereby results in inaccurate models.
RE is the extraction of semantic relations from unstructured text. Once extracted, such structured information is used in many ways: for example, as primitives in IE, for building and extending KBs and ontologies, in question answering systems, semantic search, machine reading, knowledge harvesting, paraphrasing, and building thesauri.
In this section, we briefly explain three methods employed for the problem of RE. The two
main frameworks used for this are the pipelined framework and the joint learning frame-
work. Also, some neural network-based methods for extracting entities and relations are
explained for understanding the tagging approaches used in this area.
Table 1. Proposed methods.

Framework      Method                        Contributions                                             Datasets
Pipelined      Mintz et al., 2009 (9)        DS-logistic, allows corpora of any size                   Wikipedia with Freebase
               Takamatsu et al., 2012 (14)   DS, reducing wrong labels                                 Wikipedia with Freebase
               Surdeanu et al., 2012 (13)    DS, multi-instance multi-label learning algorithms        NYT, KBP and Freebase
Joint          Li and Ji, 2014 (7)           DS, exploits global features                              ACE 2005
               Hoffmann et al., 2011 (5)     MIML, sentence-level predictions, reducing false labels   Wikipedia with Freebase
               Ren et al., 2017 (10)         DS, CoType: modeling type association, mention-feature    NYT, Wiki-KBP, BioInfer
                                             co-occurrence, and entity-relation cross-constraints
Tagging based  Lample et al., 2016 (6)       LSTM-CRF, transition-based approach, IOBES tagging        CoNLL 2002, CoNLL 2003
               Zheng et al., 2017 (17)       DS, a novel tagging scheme, Bi-LSTM-LSTM                  NYT and Freebase
In the method by Li and Ji (2014), experiments were done on the ACE 2005 dataset, which contains data from six different domains: Newswire, Broadcast Conversation, Broadcast News, Telephone Speech, Usenet Newsgroups and Weblogs. Different datasets were used by Lample et al. (2016): CoNLL 2002 and CoNLL 2003, which contain independent named entity labels for English, Spanish, German and Dutch. Since the different methodologies used different datasets, a general comparison is not possible. Contributions of the different methodologies discussed and the domains or datasets used by them are summarized in Table 1.
4 FUTURE DIRECTIONS
5 CONCLUSIONS
Semantic relations in text are a key meaning component for natural language applications. There are several proposed approaches to the RE task, and here we have given a brief summary of some of them. So far, we have reviewed some important aspects of the entity RE problem, starting with the problem formulation, discussing the different challenges and applications, and finally culminating in a discussion of some important approaches. Various pipelined approaches, joint extraction frameworks and neural network approaches using end-to-end tagging schemes have been discussed, and we can clearly say that end-to-end
REFERENCES
Bollacker, K., Evans, C., Paritosh, P., Sturge, T. & Taylor, J. (2008). Freebase: a collaboratively created
graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD Interna-
tional Conference on Management of Data (pp. 1247−1250). Association for Computing Machinery.
Gormley, M.R., Yu, M. & Dredze, M. (2015). Improved relation extraction with feature-rich composi-
tional embedding models. arXiv preprint arXiv:1505.02419.
Gupta, R. & Sarawagi, S. (2011). Joint training for open-domain extraction on the web: exploiting over-
lap when supervision is limited. In Proceedings of the fourth ACM International Conference on Web
Search and Data Mining (pp. 217−226). Association for Computing Machinery.
Hochreiter, S. & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8),
1735−1780.
Hoffmann, R., Zhang, C., Ling, X., Zettlemoyer, L. & Weld, D.S. (2011). Knowledge-based weak super-
vision for information extraction of overlapping relations. In Proceedings of the 49thAnnual Meeting
of the Association for Computational Linguistics: Human Language Technologies-Vol 1 (pp. 541−550).
Association for Computational Linguistics.
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K. & Dyer, C. (2016). Neural architectures
for named entity recognition. arXiv preprint arXiv:1603.01360.
Li, Q. & Ji, H. (2014). Incremental joint extraction of entity mentions and relations. In ACL (1)
(pp. 402−412).
Min, B., Grishman, R., Wan, L., Wang, C. & Gondek, D. (2013). Distant supervision for relation extrac-
tion with an incomplete knowledge base. In HLT-NAACL (pp. 777−782).
Mintz, M., Bills, S., Snow, R. & Jurafsky, D. (2009). Distant supervision for relation extraction with-
out labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL
and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Vol 2
(pp. 1003−1011). Association for Computational Linguistics.
Ren, X., Wu, Z., He, W., Qu, M., Voss, C.R., Ji, H., ... & Han, J. (2017, April). CoType: Joint extraction
of typed entities and relations with knowledge bases. In Proceedings of the 26th International Con-
ference on World Wide Web (pp. 1015−1024). International World Wide Web Conferences Steering
Committee.
Riedel, S., Yao, L. & McCallum, A. (2010). Modeling relations and their mentions without labeled text.
Machine Learning and Knowledge Discovery in Databases, 148−163.
Ritter, A., Zettlemoyer, L. & Etzioni, O. (2013). Modeling missing data in distant supervision for infor-
mation extraction. Transactions of the Association for Computational Linguistics, 1, 367−378.
Surdeanu, M., Tibshirani, J., Nallapati, R. & Manning, C.D. (2012). Multi-instance multilabel learning
for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural
Language Processing and Computational Natural Language Learning (pp. 455−465). Association for
Computational Linguistics.
Takamatsu, S., Sato, I. & Nakagawa, H. (2012). Reducing wrong labels in distant supervision for rela-
tion extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational
Linguistics: Long Papers-Vol 1 (pp. 721−729). Association for Computational Linguistics.
Wang, C., Fan, J., Kalyanpur, A. & Gondek, D. (2011). Relation extraction with relation topics. In Pro-
ceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 1426−1436).
Association for Computational Linguistics.
Xu, W., Hoffmann, R., Zhao, L. & Grishman, R. (2013). Filling knowledge base gaps for distant super-
vision of relation extraction. In ACL (2) (pp. 665−670).
Zheng, S., Wang, F., Bao, H., Hao, Y., Zhou, P. & Xu, B. (2017). Joint extraction of entities and relations
based on a novel tagging scheme. arXiv preprint arXiv:1706.05075.
Zheng, S., Xu, J., Zhou, P., Bao, H., Qi, Z. & Xu, B. (2016). A neural network framework for relation extraction: Learning entity semantic and relation pattern. Knowledge-Based Systems, 114, 12−23.
ABSTRACT: This work addresses decoupling control of the Twin Rotor MIMO System (TRMS). The modeling of the TRMS is done through an identification algorithm using real-time input-output data. A decoupling technique based on the Relative Gain Array (RGA) is proposed to eliminate the cross-coupling effect. Stabilization of the TRMS is achieved with PID controllers, the parameter ranges of which are obtained with the Kharitonov stability criterion. The PSO method is then applied for parameter tuning within the ranges obtained. The performance of the controller is tested in simulation, and the responses are found satisfactory.
1 INTRODUCTION
The TRMS has two control inputs and two outputs, pitch (ψ) and yaw (ϕ), with significant cross-coupling between them [1]. Designing a controller is a challenging task due to the strong coupling effect and highly nonlinear characteristics of the TRMS.
In [2] the authors explain an identification method for the TRMS, in which a non-linear least squares identification method is applied for calibration. In [3] identification of the TRMS is performed using neural network approaches. The different identification and control techniques frequently applied to MIMO systems are explained in [5, 6]. Relative gain analysis is used for pairing analysis of a MIMO system [7], and a decoupling technique is applied to eliminate the cross-coupling effect of the MIMO system.
The aim of the present work is to design PID controllers to control the decoupled TRMS plant. First, identification of the TRMS is carried out and the transfer function is obtained. Decoupling of the TRMS is then performed to eliminate its cross-coupling effect, and the decoupling is validated by RGA analysis and simulation results. The PID controller is designed on the basis of the Kharitonov theorem, through which a robust range of PID parameters is obtained; PSO is then employed for fine tuning of the controller parameters. The remainder of the paper is organized as follows: the next section describes the identification process of the TRMS, followed by the decoupling technique in Section 3. In Section 4 the range of PID parameters is determined using the Kharitonov theorem, while in Section 5 the PSO method is implemented to obtain the optimum values of the PID parameters. Section 6 presents the simulation results, followed by the conclusion.
2 IDENTIFICATION OF THE TRMS

The objective of the identification process is to find the transfer function of the TRMS. As cross-coupling exists between the two rotors of the TRMS, it is considered as two linear rotor models with two linear couplings in between. Therefore four linear models have to be identified: u1 to y1, u2 to y2, u2 to y1 and u1 to y2, as shown in Figure 1. An experiment for model identification is carried out with the help of the MATLAB toolbox using chosen identification models for the four transfer functions [4–6]. The TRMS model and the experimental setup are excited with the same input excitation and their responses are recorded. The excitation signal contains different sinusoids. To estimate an accurate model, the error between the chosen model and the actual plant output is minimized; the optimal model parameters, for which the square of the error is minimal (the least mean square (LMS) method), are taken as the identified model [7]. The identified model of the TRMS is obtained as
G(s) = [ G11(s)  G12(s) ; G21(s)  G22(s) ]   (1)

with numerically identified entries such as the main-rotor channel

G11(s) = (0.01657 s^2 + 0.4194 s + 2.454) / (s^3 + 1.487 s^2 + 4.403 s + 5.449)   (2)
3 DECOUPLING OF TRMS
Here a decoupler is designed for the TRMS based on the generalized decoupling technique, to rectify the coupling effect associated with the plant. In the generalized decoupling technique, the decoupler for any square plant G(s) is designed using the formula in equation (3). The RGA method is applied to investigate the pairing of the plant, and it is verified that the decoupled plant GN(s) shown in Figure 2 is perfectly decoupled, as described by equation (7).
GD(s) = GI(0) * GR(s)   (3)

where G(0) is the steady-state gain matrix of G(s), GD(s) is the decoupling matrix of G(s), GI(0) is the inverse matrix of G(0), and GR(s) is the diagonal matrix of G(s), obtained by considering G11 and G22 only (the diagonal form of the plant).
Figure 3. Output response of decoupled plant when step input is applied to yaw rotor while zero input
is applied to pitch rotor.
Figure 4. Output response of decoupled plant when step input is applied to pitch rotor while zero
input is applied to yaw rotor.
Now, the relative gain array of the above matrix is calculated as described by equation (7); it verifies that, upon decoupling, each output of the plant depends solely on a single reference input.

RGA(GN(0)) = RGA(G(0) * D(0)) = RGA([ 0.4457  0.0018 ; -0.0116  1.0196 ]) = [ 1  0 ; 0  1 ]   (7)
This is also verified by the simulation results shown in Figure 3: when a step is applied to port 2 of the decoupler (Figure 2) and zero input is applied to port 1, the corresponding yaw output shows a step response while the pitch output remains zero, signifying that the decoupler completely nullifies the coupling effect on the output. A similar situation arises when a step is applied to port 1 of the decoupler and zero input to port 2, as a result of which the corresponding pitch output shows a step response while the yaw output remains zero (Figure 4). The RGA computation of equation (7) can be reproduced numerically as sketched below.
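For reference, the standard element-wise RGA formula RGA(K) = K o inv(K)^T (Hadamard product) can be applied to the decoupled steady-state gain quoted in equation (7):

import numpy as np

GN0 = np.array([[0.4457, 0.0018],
                [-0.0116, 1.0196]])   # decoupled steady-state gain matrix

rga = GN0 * np.linalg.inv(GN0).T      # element-wise (Hadamard) product
print(np.round(rga, 3))               # approximately the identity matrix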
4 PID PARAMETER RANGE USING THE KHARITONOV THEOREM

Let us assume the set δ(s) of real polynomials of degree n of the form

δ(s) = δ0 + δ1 s + δ2 s^2 + ... + δn s^n

where each coefficient δi lies within a known interval.
The PID controller has the form

H(s) = Kp + Ki/s + Kd s   (10)

Four interval polynomials are then obtained, in which the ranges of Kp, Ki and Kd are [Kp-, Kp+], [Ki-, Ki+] and [Kd-, Kd+], respectively. In order to satisfy the stability condition for these four interval polynomials, the ranges of the PID controller parameters obtained for the main rotor are Kp = [0.1, 1], Ki = [0.1, 1] and Kd = [0.5, 2]. Similarly, by adopting the same procedure, the ranges obtained for the tail rotor are Kp = [0.1, 2], Ki = [0.1, 0.5] and Kd = [2, 5].
5 PSO BASED TUNING OF PID PARAMETERS

The PSO algorithm is described by the flow chart in Figure 5. The velocity and position updates used here are

V_id^(n+1) = V_id^n + c1 rand() (P_id^n - X_id^n) + c2 rand() (P_gd^n - X_id^n)   (13)

X_id^(n+1) = X_id^n + V_id^(n+1)

where X_id is the particle position, P_id its personal best, P_gd the global best position, and c1, c2 are acceleration coefficients.
The convergence characteristics of the main and tail rotors are shown in Figures 6 and 7, respectively. A sketch of the PSO search over the PID parameter ranges follows.
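A minimal sketch of such a PSO search over (Kp, Ki, Kd) within the main-rotor Kharitonov ranges is given below; the cost function is a placeholder for the simulated closed-loop performance index actually used, and the swarm size and coefficients are illustrative choices.

import random

BOUNDS = [(0.1, 1.0), (0.1, 1.0), (0.5, 2.0)]   # Kp, Ki, Kd (main rotor)
C1 = C2 = 2.0                                    # acceleration coefficients

def cost(gains):
    # Placeholder: in practice this would be a simulated step-response index.
    kp, ki, kd = gains
    return (kp - 0.6)**2 + (ki - 0.4)**2 + (kd - 1.2)**2

swarm = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(20)]
vel = [[0.0] * 3 for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=cost)

for _ in range(100):
    for i, x in enumerate(swarm):
        for d in range(3):
            # velocity update of equation (13), then position update
            vel[i][d] += (C1 * random.random() * (pbest[i][d] - x[d])
                          + C2 * random.random() * (gbest[d] - x[d]))
            x[d] = min(max(x[d] + vel[i][d], BOUNDS[d][0]), BOUNDS[d][1])
        if cost(x) < cost(pbest[i]):
            pbest[i] = x[:]
    gbest = min(pbest, key=cost)

print("tuned (Kp, Ki, Kd):", [round(g, 3) for g in gbest])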
6 SIMULATION RESULTS
The PID controller parameters are determined individually for both rotors using the PSO technique within the ranges obtained by the Kharitonov criterion. The step responses and control signals are shown in Figures 8(a)–(b) and 9(a)–(b) for the main and tail rotors, respectively.
7 CONCLUSION
This paper designs a decoupler for the TRMS based on the RGA technique in order to nullify the cross-coupling effect of the MIMO system. Two PID controllers, whose parameter ranges were derived from the Kharitonov stability theorem, were then tuned using the PSO technique. The accuracy of this method is established by the simulation results. To further validate these results, the proposed technique will be implemented on the real-time TRMS at the NIT Durgapur Advanced Control Laboratory.
REFERENCES
[1] TRMS 33–949S User Manual. Feedback Instruments Ltd., East Sussex, U.K., 2006.
[2] D. Rotondo, F. Nejjari, V. Puig (2013) Quasi-LPV modeling, identification and control of a twin
rotor MIMO system, Control Engineering Practice, vol. 21, iss. 6, pp. 829–846.
[3] B. Subudhi & D. Jena (2009) Nonlinear system identification of a twin rotor MIMO system, TEN-
CON 2009 IEEE Region 10 Conference.
[4] M. A. Hossain, A. A. M. Madkour, K. P. Dahal & H. Yu, (2004) Intelligent active vibration control
for a flexible beam system, Proceedings of the IEEE SMC UK-RI Chapter Conference, London-
derry, U.K.
[5] I. Z. Mat Darus & Z. A. Lokaman (2010) Dynamic modeling of Twin Rotor Multi System in hori-
zontal motion, Journal Mekanikal, no. 31, pp. 17–29.
[6] I. Z. Mat Darus (2004) Soft computing active adaptive vibration control of flexible structures, Ph.D.
Thesis, Department of Automatic Control and System Engineering, University of Sheffield.
[7] A. Rahideh, M.H. Saheed & H.J.C. Huijberts (2008) Dynamic modeling of the TRMS using analytical and empirical approaches, Control Engineering Practice, vol. 16, no. 3, pp. 241–259.
[8] Z.L. Gaing (2004) A particle swarm optimization approach for optimum design of PID controller in AVR system, IEEE Transactions on Energy Conversion, vol. 19, no. 2, pp. 384–391.
1 INTRODUCTION
Face recognition is the method or technique of detecting and recognizing a person from an
image. A popular approach is to extract features from an image and use these to match with
other images.
It has many applications in security, identification systems, and surveillance. For example,
the FBI has a program to include face recognition along with other biometrics to retrieve
records from its database. We also find everyday uses of face recognition like authentication
mechanisms in mobile devices.
Face recognition can be traced back to the 1960s. The objective then was to select a small set of images from a large database that could possibly contain an image to be matched. Since then, we've come a long way: now, features extracted from a single image can give us a good idea about that image. We've been able to overcome hurdles like changes in illumination, changes in facial expression, and motion of the head. It is also relatively easier to obtain datasets today.
We have modified the existing ORB detector-descriptor to Root ORB in the hope of
improving accuracy and decreasing computational time. We have used the Bag of Words
model before classification to create image histograms. We have then used machine learning
classifiers to classify the images in the dataset. We have conducted a comparative study of
Root SIFT versus Root ORB. ORB was used as it is very fast and open source.
2 TECHNICAL BACKGROUND
In face recognition, faces in the image are identified and represented by features and descrip-
tors extracted from the image. This involves two steps: Feature detection and Description.
In detection, points of interest are determined. In description, attributes of these points
are ascertained and stored in a vector. This vector is then used for applications like image
classification. The computational efficiency of this task depends on the feature detector-
descriptor algorithms we use, and the learning algorithms we apply.
2.1 SIFT
The scale-invariant feature transform (SIFT) is an algorithm to extract and characterize local
features in images. SIFT is scale and rotation invariant, and is therefore capable of delivering
promising accuracies. It extracts the key-points within the image. These key points are then
used for comparison, classification and matching. It performs the following steps to extract the key points. First, it identifies the interest points in the image by using the Difference of Gaussians. Then, the location and scale of these interest points are determined. Next, an orientation is assigned to each of these interest points. Finally, gradients are measured around these points and the image is transformed to minimize distortion.
2.3 ORB
The ORB algorithm makes use of the FAST detector and BRIEF descriptor. First, FAST is
applied to obtain all key points. Then, the Harris corner measure is applied and the best key
points among them are found. ORB makes use of the BRIEF descriptor. Although BRIEF
performs poorly on rotation, ORB stabilizes it by using the generated key points.
2.4 FAST
FAST detector uses circles to classify key points as corner points. To check whether a pixel
is a corner point, we compare the intensity of the pixel with the intensities of the circle of
points around it. If they are considerably brighter or darker, we can flag this pixel as a corner
point. FAST is renowned for its high computational efficiency. It also performs efficiently
when used with machine learning algorithms.
3 PROPOSED METHOD
In subsequent paragraphs, we discuss in detail the methods used in our proposed system.
After L1 normalization is applied to all the ORB vectors, the element-wise square root is taken for each of these vectors. In the feature-map space, calculating the Euclidean distance is then analogous to calculating the Hellinger distance in the original space:

x'^T y' = H(x, y)   (2)

where x' is the ORB vector after L1 normalization and the element-wise square root.
Key points are obtained for the images after Root ORB is applied. Root ORB uses a 32-dimensional vector to store the descriptors obtained and also uses the FAST detector; due to this, the time required for classification is much less than for Root SIFT. A sketch of the transform is given below.
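A minimal sketch of the transform, assuming OpenCV's ORB implementation; casting the binary descriptors to float before normalizing is our assumption about the practical handling, not a detail stated in the paper.

import cv2
import numpy as np

orb = cv2.ORB_create()

def root_orb(image_gray):
    keypoints, desc = orb.detectAndCompute(image_gray, None)
    if desc is None:
        return keypoints, None
    desc = desc.astype(np.float32)
    desc /= (desc.sum(axis=1, keepdims=True) + 1e-7)   # L1 normalization
    return keypoints, np.sqrt(desc)                    # element-wise square root

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)     # assumed sample image
kps, descriptors = root_orb(img)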
The SVM classifier uses the additive chi-squared kernel

k(x, y) = sum_i [ 2 x_i y_i / (x_i + y_i) ]   (3)

and the Naïve Bayes classifier models each feature with a Gaussian likelihood

P(x_i | y) = (1 / sqrt(2 pi sigma_y^2)) exp( -(x_i - mu_y)^2 / (2 sigma_y^2) )   (4)
3.6 Pipeline
After Root ORB was applied to the dataset, the descriptors obtained were clustered using the bag-of-words model. The data was then separated into a training set and a testing set. Finally, we studied the results obtained from each of the three classifiers used (K-NN, Naïve Bayes and SVM) and tabulated them. This was done for the Faces95 and Grimace datasets; a sketch of the pipeline follows.
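A sketch of this pipeline under stated assumptions: the vocabulary size (200 visual words) is an illustrative choice, and scikit-learn's AdditiveChi2Sampler with a linear SVM stands in for the additive chi-squared kernel SVM discussed in the results.

import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def build_histograms(descriptor_sets, n_words=200):
    # Visual vocabulary from all descriptors, then one histogram per image
    kmeans = MiniBatchKMeans(n_clusters=n_words, random_state=0)
    kmeans.fit(np.vstack(descriptor_sets))
    hists = []
    for desc in descriptor_sets:
        words = kmeans.predict(desc)
        hists.append(np.bincount(words, minlength=n_words).astype(float))
    return np.array(hists)

# X: list of per-image Root ORB descriptor arrays, y: labels (assumed given)
# hists = build_histograms(X)
# clf = make_pipeline(AdditiveChi2Sampler(), LinearSVC())
# clf.fit(hists[train_idx], y[train_idx])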
4 DATASETS
5 RESULTS
All our programs were executed on an Intel Core i7 machine running Windows 10. The packages used are OpenCV 3 (3.2.0), scikit-learn (0.18.1), numpy (1.11.0), joblib (0.11), and built-in Python packages including os, time, random and math. Python 3.5 was used.
First, we present the results for Root SIFT when applied with K-NN, Naïve Bayes, and
SVM classification algorithm:
1. When Root SIFT is used on Grimace database with the following machine learning algo-
rithms applied independently:
2. When Root SIFT is used on Faces95 database with the following machine learning algo-
rithms applied independently:
3. When Root ORB is used on Grimace database with the following machine learning algo-
rithms applied independently:
4. When Root ORB is used on Faces95 database with the following machine learning algo-
rithms applied independently:
6 CONCLUSIONS
The Faces95 dataset poses the challenges of large head-scale variation and illumination changes, while the Grimace dataset focuses on large variation in expression and motion of the face. From our results, we see that the accuracy of our method is high in all test cases; hence, we can say that our method overcomes the above hurdles.
From Table 2, we notice that the K-NN classifier performed much more slowly than the other classifiers; upon closer observation, it is the slowest in all cases. This could be attributed to the fact that finding the nearest neighbor in a high-dimensional space requires a lot of time.
From the results, we notice that the SVM classifier is the most accurate. This could be due to the additive chi-squared kernel approximation, which samples the kernel's Fourier transform at regular intervals.
When we compare Tables 1 and 2 with Tables 3 and 4 respectively, we observe that the
accuracy of the Root ORB method is about the same, or sometimes greater than the accuracy
of the Root SIFT method. However, when the computational times are compared, the Root
ORB method is significantly faster than the Root SIFT method.
REFERENCES
Arandjelović, Relja, and Andrew Zisserman. “Three things everyone should know to improve object
retrieval.” In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2911–
2918. IEEE, 2012.
Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. “Surf: Speeded up robust features.” Computer Vision–ECCV 2006 (2006): 404–417.
Calonder, Michael, Vincent Lepetit, Christoph Strecha, and Pascal Fua. “Brief: Binary robust inde-
pendent elementary features.” Computer Vision–ECCV 2010 (2010): 778–792.
Faces95 Dataset—http://cswww.essex.ac.uk/mv/allfaces/faces95.html.
Grimace dataset—http://cswww.essex.ac.uk/mv/allfaces/grimace.html.
Guo, Gongde, Hui Wang, David Bell, Yaxin Bi, and Kieran Greer. “KNN model-based approach in
classification.” In CoopIS/DOA/ODBASE, vol. 2003, pp. 986–996. 2003.
Hearst, Marti A., Susan T. Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. “Support vec-
tor machines.” IEEE Intelligent Systems and their applications 13, no. 4 (1998): 18–28.
Lewis, David D. “Naive (Bayes) at forty: The independence assumption in information retrieval.”
In European conference on machine learning, pp. 4–15. Springer, Berlin, Heidelberg, 1998.
Lowe, David G. “Distinctive image features from scale-invariant keypoints.” International Journal of Computer Vision 60, no. 2 (2004): 91–110.
Maji, Subhransu, Alexander C. Berg, and Jitendra Malik. “Classification using intersection kernel sup-
port vector machines is efficient.” In Computer Vision and Pattern Recognition, 2008. CVPR 2008.
IEEE Conference on, pp. 1–8. IEEE, 2008.
Rosten, Edward, and Tom Drummond. “Machine learning for high-speed corner detection.” Computer Vision–ECCV 2006 (2006): 430–443.
Rublee, Ethan, Vincent Rabaud, Kurt Konolige, and Gary Bradski. “ORB: An efficient alterna-
tive to SIFT or SURF.” In Computer Vision (ICCV), 2011 IEEE international conference on,
pp. 2564–2571. IEEE, 2011.
A. Vinay, Abhijay Gupta, Harsh Garg, Shreyas Bhat, K.N. Balasubramanya Murthy &
S. Natarajan
Center for Pattern Recognition and Machine Intelligence, PES University, Bangalore, India
ABSTRACT: Face recognition is transforming the way people interact with machines. Earlier it was used in specific domains like law enforcement, but with extensive research being done in this field, it is being extended to various applications like automatic face tagging in social media and surveillance systems in airports, theaters and so on. Local feature detection and description are gaining significance in the face recognition community, and extensive research on SURF and SIFT descriptors has found widespread application. Key points matched by SURF and triangulated using Delaunay Triangulation boost the interest points detected. Other modern techniques like machine learning and deep learning require huge amounts of training data and computational capability, which sometimes limits their usage. In contrast, hand-crafted models like SURF and SIFT avoid the requirement for training data and heavy computing power. The pipeline proposed in this paper reduces the average computational power required and increases accuracy.
1 INTRODUCTION
One of the most challenging problems faced by the computer vision community is face recognition. Decades of work in this field have made face recognition systems almost as capable as human beings. Face recognition has been increasingly applied in surveillance systems and authentication services in order to prevent the loss of sensitive information and curb security breaches leading to loss of money. Apart from these applications, face recognition is also extensively used by social networking sites such as Facebook to tag friends. Tech giants such as Google and Microsoft use face recognition for image-based search and to provide authentication services, respectively.
Although face recognition has been effectively used in a number of applications, its performance tends to decline when the image shows significant variations in pose, expression, scale, illumination and translation. Most real-world problems that require face recognition involve these variations, and computer vision researchers are trying to make such systems invariant to all these challenges. To overcome the problems associated with pose, expression, scale and, to some extent, illumination, we propose a robust model which performs well when tested on datasets whose images vary in the above-mentioned ways.
In any face recognition system, the most crucial step is to locate interest points in an image. A vast variety of keypoint detectors and feature descriptors have been used extensively by researchers in the literature (e.g. Bay et al. (2008), Lindeberg (1998), Lowe (2004)). In recent years, a considerable amount of work has been done on Speeded-Up Robust Features (SURF), which builds upon previous works (e.g. SIFT) to speed up detection and incorporate invariance to scale and in-plane rotation. The proposed model combines several algorithms and mathematical functions to boost the robustness and veracity of the system.
Delaunay Triangulation finds use in fingerprint identification, logo recognition and other object recognition applications (e.g. Miri and Shiri (2012)). In Bebis et al. (1999), the Delaunay Triangulation is computed using ridge endings and ridge bifurcation minutiae represented in the form of their coordinates. This new index-based approach achieves average accuracies of 86.56%, 93.16% and 94.12% for 3, 5 and 7 imprints per person on testing sets of size 210, 150 and 90, respectively. It is characterized by good index selectivity, low storage requirements, minimal indexing requirements and fast identification for fingerprint recognition. Kalantidis et al. (2011) use a novel discriminative triangle representation using multi-scale Delaunay Triangulation, with indexing done using an inverted file structure for robust logo recognition. On adding 4k distractor classes to the Flickr dataset, the performance of the proposed multi-scale Delaunay Triangulation approach drops by 5.5% as compared to the baseline Bag of Words model.
Geng and Jiang (2009a, 2009b) introduced SIFT enhancements such as Partial-Descriptor-SIFT (PDSIFT), Key-Points-Preserving-SIFT (KPSIFT) and Volume-SIFT (VSIFT), which keep all the initial keypoints, preserve the interest points at large scale or near face boundaries, and remove unreliable keypoints based on their volume, respectively. KPSIFT and PDSIFT reduce the error rate by 4.6% and 3.1% on the AR dataset. SIFT in combination with a bag of words model has been deployed in Sampath et al. (2016) to detect household objects. Modern techniques such as Convolutional Neural Networks are outperformed by a hybrid model using SIFT (Al-Shabi et al. (2016)).
The effectiveness of combining Delaunay Triangulation and SIFT has been demonstrated in Dou and Li (2012). As the number of viewpoints increases, the accuracy of the SIFT algorithm decreases. To overcome this, Delaunay Triangulation (DelTri) exploits the overlapped regions in different images. Due to the uniqueness of the DelTri, the overlapped region of an image pair overcomes, to a certain degree, the changes generated by different viewpoints, thus increasing the correct-match ratio from 58.8% (SIFT+RANSAC) to 88.2% (SIFT+DelTri). This combination has also been used in Liu et al. (2017) for face alignment along with Convolutional Neural Networks.
Recent descriptors such as SURF improve upon SIFT by restricting the total number of reproducible orientations through the use of information from a circular area around the interest points. After this step, a square region is constructed, and the descriptors are extracted from these regions. SURF is used in various object tracking algorithms (Shuo et al. (2012)), where interest points in the defined object are matched between consecutive frames by computing the Euclidean distance between their descriptors.
3 METHOD PROPOSED
A robust approach with better results is proposed by combining bilateral filters for image smoothing, SURF to detect facial keypoints, PCA to minimize the number of key points, and FLANN and Delaunay Triangulation for face matching against the dataset. The main steps of our model are depicted in Figure 1.
where c(ξ, x) measures the geometric closeness between the neighborhood center x and a nearby point ξ, and s(g(ξ), g(x)) measures the photometric similarity between the pixel at the neighborhood center x and that of a nearby point ξ.
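A one-call sketch of this smoothing step using OpenCV's bilateral filter; the parameter values are illustrative, not taken from the paper:

```python
import cv2

image = cv2.imread('face.jpg')
# d: pixel neighborhood diameter; sigmaColor: photometric similarity scale;
# sigmaSpace: geometric closeness scale (the roles of s(.) and c(.) above).
filtered = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
```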
3.2 SURF
Speeded-Up Robust Features (SURF) is invariant to in-plane rotation, contrast, scale and brightness. It comprises a keypoint detector, which interpolates the highly discriminative facial points, and a descriptor, which extracts the features of each keypoint by constructing feature vectors. To minimize computation time, SURF employs a fast Hessian matrix approximation, and the scale space is examined by up-scaling integral-image-based filter sizes to detect interest points.
Given a point a = (x, y) in an image I, the Hessian matrix H(a, σ) at scale σ is defined (in its standard form, reconstructed here) as:

H(a, σ) = [ Gxx(a, σ)  Gxy(a, σ) ;  Gxy(a, σ)  Gyy(a, σ) ]

where Gxx(a, σ), Gxy(a, σ) and Gyy(a, σ) are the convolutions of the Gaussian second-order partial derivatives with the image I at point a.
To minimize computation time, the Gaussians are approximated as a set of box filters which denote the lowest scale for computing the blob response maps, represented by Dxx(a, σ), Dxy(a, σ) and Dyy(a, σ). The determinant of the Hessian matrix is then estimated (again in its standard form) as:

det(H_approx) = Dxx·Dyy − (ω·Dxy)²

where ω represents a weight for the energy conservation between the actual and the approximated Gaussian kernels.
Interest points in an image are found at varied scales, where the scale space is implemented as an image pyramid. Gaussian smoothing and sub-sampling are used to generate the pyramid levels.
The SURF descriptor uses the following methodology to find features in an image:
Step I: Setting a reproducible orientation through the use of information from the circular area around the derived keypoint.
Step II: Constructing a square region aligned to the chosen orientation.
A_i = (Σ d_x, Σ |d_x|, Σ d_y, Σ |d_y|)    (5)

where d_x and d_y are the Haar wavelet responses in the horizontal and vertical directions, respectively. The descriptor emphasizes the spatial distribution of gradient information inside the interest point neighborhood.
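A minimal sketch of the keypoint detection and triangulation steps, assuming the opencv-contrib build (SURF lives in cv2.xfeatures2d), SciPy for the triangulation, and the `filtered` image from the bilateral filtering sketch above; 565 is the min-Hessian threshold reported later in the paper, and applying PCA to the descriptor matrix is one plausible reading of the SURF-PCA step:

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay
from sklearn.decomposition import PCA

gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=565)
keypoints, descriptors = surf.detectAndCompute(gray, None)   # 64-dim descriptors

# Reduce the descriptors while retaining 80% of the variance (see Section 4)
reduced = PCA(n_components=0.8).fit_transform(descriptors)

# Delaunay triangulation over the keypoint coordinates (cf. Figure 2c)
points = np.array([kp.pt for kp in keypoints])
triangulation = Delaunay(points)
```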
Figure 2. (a) Applying bilateral filtering to the input image; (b) finding key points using SURF-PCA; (c) applying Delaunay Triangulation to (b); (d) matching faces with the dataset.
3.4 FLANN
Once facial key points are found, a matching algorithm is used to search the dataset. Various approximate methods exist for computing nearest neighbors, such as brute-force search, Support Vector Machines and FLANN (Muja and Lowe (2009)); they are typically used for computing the top k nearest neighbors efficiently, ignoring the rank of the dataset. To compute a short list of nearest neighbors we use the FLANN implementation of the k-d tree algorithm, rather than considering all neighbors as in the Rank-Order clustering distance given below:

d(a, b) = Σ_{i=1}^{O_a(b)} O_b(f_a(i))    (6)
where f_a(i) is the i-th face in the neighbor list of a, and O_b(f_a(i)) gives the rank of face f_a(i) in face b's neighbor list.
We use only the sum over the top k neighbors, where presence or absence on the short list is considered more significant than the numerical rank. A distance measure obtained by summing the presence or absence of shared nearest neighbors, rather than their ranks, is given by the following distance function:

d_m(a, b) = Σ_{i=1}^{min(O_a(b), k)} I_b(O_b(f_a(i)), k)    (7)
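A sketch of the FLANN k-d tree short-listing with OpenCV's FlannBasedMatcher; the descriptor arrays are hypothetical names for the SURF outputs of the query and gallery images, and the ratio test is a common companion step rather than something the paper specifies:

```python
import cv2

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)          # number of tree leaves to visit per query
flann = cv2.FlannBasedMatcher(index_params, search_params)

# desc_query / desc_gallery: float32 SURF descriptors of the two face images
matches = flann.knnMatch(desc_query, desc_gallery, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
```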
[x_h, y_h, k]ᵀ = H [x, y, 1]ᵀ    (9)
where (x′, y′) ⇔ (x, y) are pixel-point correspondences and H is the homography transformation matrix.
The symmetric transfer error for matching keypoints in a pair of images is calculated using the Euclidean distance and the transformation matrix as d(x, H⁻¹x′)² + d(x′, Hx)². The inliers, i.e. correspondences whose error is less than a threshold, are counted.
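A sketch of estimating H with RANSAC and counting inliers by the symmetric transfer error, assuming `good`, `kp_query` and `kp_gallery` from the matching step above; the error threshold is illustrative:

```python
import cv2
import numpy as np

src = np.float32([kp_query[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_gallery[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)

# Symmetric transfer error per correspondence: d(x, H^-1 x')^2 + d(x', H x)^2
fwd = cv2.perspectiveTransform(src, H)
bwd = cv2.perspectiveTransform(dst, np.linalg.inv(H))
errors = ((dst - fwd) ** 2).sum(axis=(1, 2)) + ((src - bwd) ** 2).sum(axis=(1, 2))
inliers = int((errors < 25.0).sum())     # correspondences below the threshold
```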
4 DATASET
To test the robustness and effectiveness of the proposed model we use FACES95, FACES96, the ORL Database of Faces and GRIMACE, which together show variance in pose, expression, rotation and illumination.
• FACES95 contains 1440 images of 72 individuals, 20 images each. The dataset was constructed by taking snapshots of 72 individuals with a delay of 0.5 seconds between successive frames in the sequence. Significant head movement variations were introduced between images of the same individual. FACES96, with 3040 images, was constructed using a similar methodology.
• GRIMACE consists of 360 images of 18 individuals. The images vary in scale, lighting and position of the face in the image. In addition, the subjects made grimaces after moving their heads, which get more extreme towards the end of the sequence.
• The ORL Database contains images of 40 individuals, with 400 images in total. The pictures were taken at different times and with varying light conditions, facial expressions and facial details. The database was used in a face recognition project carried out in association with the Speech, Vision and Robotics Group of the Cambridge University Engineering Department.
We executed the proposed model over every group of images present in the four benchmark databases, namely FACES95, FACES96, GRIMACE and ORL. The results obtained using our technique on the four datasets are tabulated in Table 1, which compares the accuracy of our model over the different datasets. The retained variance for PCA is set to 0.8, i.e. 80% of the total variance is retained. The threshold for the min-Hessian value is set to 565, which accepts only the most salient keypoints. The ORL database, which varies in pose, expression and scale, performed well using our method. The other three datasets, which in addition to pose, expression and scale also vary in illumination, perform moderately worse with our proposed approach as compared to ORL. The proposed model thus fails under varying illumination and should not be used if images are captured under varied lighting conditions. The usage of FLANN in our model also increases the speed of matching images from the database.
REFERENCES
Al-Shabi, M., W.P. Cheah, & T. Connie (2016). Facial expression recognition using a hybrid cnn-sift
aggregator. arXiv preprint arXiv:1608.02833.
1 INTRODUCTION
In SDN, the control plane is decoupled from the data plane: the control plane is removed from the hardware and moved to a set of controllers. The control plane takes the decisions about where and how to forward packets. Small networks need only a centralized controller to work properly, but when the size of the network increases, a single controller is not sufficient.
p_i = c_i / c_i^max    (1)
Literature on control plane | Opportunities | Challenges
(Ksentini et al., 2016) | Optimizing the performance of the control plane | Controller placement problem
(Galich et al., 2017) | The total control traffic latency can be reduced by reducing either switch or controller latency | Finding the location of controllers and switches
(Han et al., 2016) | Minimize the control latency with an optimal number of controllers | Considering hundreds of topologies
(Zhou et al., 2017) | Balancing among controllers | Proper arrangement of controllers and switches
The data plane is the part of the network that carries user traffic. It enables data transfer to and from clients, handles multiple conversations through multiple protocols, and manages conversations with remote peers. Data plane traffic travels through routers; the data plane uses forwarding devices for the processing and delivery of packets.
Bozakov and Rizk (2013) observe that, in SDN applications, switches with various capacities for control message processing cause unpredictable delays due to their concurrent operation. Their methodology uses a queuing model to characterize the service of a switch's control interface in order to address this issue. To improve predictability in terms of expected delays, it implements the control connection to the switch and also enables applications to easily adapt to the control message processing rates. Controller messages are transmitted to a specific switch over an established control connection through a controller interface, typically implemented as a send-to-switch function; the developer can therefore check whether a control message has been accepted by the interface. If a switch is operated beyond its processing limits, instead of allowing the messages to queue up at the switch, the socket is blocked at the application level. As a result, the application can adjust its sending rate much more quickly. The main benefit of this methodology is that the current SDN abstraction model is maintained.
Pontarelli et al. (2017) present a method to implement complex tasks in stateful SDN programmable data planes. The method proposes to use the internal microcontroller, typically used to configure the programmable data plane, to also perform some complex operations that do not need to be executed on each packet. These operations can be executed on a set of data gathered by the data plane and processed on a time scale that is much longer than the time window of a packet, but much shorter than the time scale needed by an external SDN controller. Moreover, using the configuration microcontroller instead of an external SDN controller avoids the exchange of data on the control links and permits fine-grained tuning of the operations' timing. Extending the original stateless SDN data plane paradigm enables the execution of simple control functions directly in the fast path. The paper thus extends the stateful SDN data plane with support for a set of lazy operations, where "lazy" refers to operations that are triggered by the reception of a packet inside the pipeline but return their results after a certain amount of time, without blocking the forwarding of the packet itself inside the switch.
Zhang et al. (2014) proposed the Big Switch abstraction as a specification mechanism for high-level network behavior. This specification allows the operator to define end-to-end flow policies, which the network operating system can use for placing rules on individual switches; it is forced to do so by the limited capacity of the Ternary Content Addressable Memories (TCAMs) used for rules in each switch. Using a centralized rule management system, the policy is compiled down to individual rules on different switches. Each packet is either permitted (PERMIT rule) or dropped (DROP rule). For placing rules on switches, the authors proposed a solution based on Integer Linear Programming (ILP), which can be applied to a given firewall policy while optimizing the number of rules and maintaining the switch capacity constraints. Switch priority, capacity, and policy constraints are satisfied by the ILP, which also optimizes certain objective functions such as minimizing the total number of rules. However, complex rule placement constraints, such as monitoring certain packets without letting firewall rules block them before they reach the monitoring rules, are not supported by this approach.
Literature on data plane | Opportunities | Challenges
(Pontarelli et al., 2017) | Because almost all programmable data planes already have an internal micro-controller used to configure the memory tables, the method proposed in the paper could be widely applied. | To implement complex tasks in a stateful SDN programmable data plane.
(Zhang et al., 2014) | Applicable to real-sized networks. | The meaning of the original policies has to be maintained by the results obtained from this method.
(Bozakov & Rizk, 2013) | Extensions of this method can manage more complex control network topologies and distributed controllers. | Device heterogeneity as an inherent property of SDN which must be considered.
From the above literature it is observed that the key challenges in the SDN data plane are the implementation of complex tasks in an SDN programmable data plane and the sharing of rules across different paths. Proper handling of these problems leads to better opportunities. These observations are shown in Table 2.
The NFV concept was introduced to make the network more flexible and simpler than the traditional network concept, which is dependent on hardware constraints (Lopez, 2014). NFV is about providing network functions in software rather than through dedicated hardware. Most importantly, the benefit of NFV is the flexibility to easily, rapidly and dynamically provision and instantiate new services in various locations.
SDN is another network paradigm, and the combination of SDN and NFV has started a new era of networks. The combination of SDN principles and NFV infrastructure can change the way network applications are built, deployed and controlled, as described by King et al. (2015). To provide flexibility, scale, distribution and bandwidth matching user demands, flexible and dynamic optical resources are used in current transport networks. Due to the significant engineering resources they require, these are non-real-time capabilities and often lack the flexibility for dynamic scenarios. To overcome these challenges, King et al. propose a new architecture combining SDN, NFV, flexi-grid and ABNO (Application-Based Network Operations). This architecture is capable of deploying a vCDN (virtualized Content Distribution Network) that can scale with user bandwidth demand and control resources programmatically.
Another combination of SDN and NFV, with IoT, is proposed by Ojo et al. (2016). A typical IoT architecture has a three-layered structure: the perception layer, the network layer and the application layer. By adopting the SDN concept in this IoT architecture, the network layer is divided into two planes: the control plane and the data plane. The data layer comprises SDN routers and switches that forward packets; these switches and routers are programmatically controlled in the control layer. By enabling NFV on this new SDN-adopted IoT architecture, the network agility and network efficiency of IoT applications increase. The IoT gateway becomes dynamic, scalable and elastic through virtualization. Virtualization makes the infrastructure more flexible and sustainable by decoupling the network control and management functions from the hardware.
A future internet scenario named virtual presence, leveraging the joint SDN/NFV paradigm together with a fog computing approach, is proposed by Faraci and Lombardo (2017). Virtual presence is achieved by exporting a real hardware or software resource.
Literature on SDN and NFV | Opportunities | Challenges
(King et al., 2015) | Capable of responding to high-bandwidth real-time and predicted video stream demands | The NFV environment is more dynamic than the traditional one
(Ojo et al., 2016) | SDN and NFV concepts help to solve new challenges of IoT | Security issues of NFV
(Faraci & Lombardo, 2017) | A smart device sharing service could increase resource utilization | Having to coexist in a cloud-integrated environment
5 CONCLUSION
This paper focuses on the research opportunities and challenges in SDN and NFV. From the literature survey on the control plane and the data plane in SDN, the identified research opportunities and challenges are summarized. NFV shows how network functions can be implemented in software rather than hardware. It is observed that the combination of NFV and SDN is a promising technology for solving many of the complex networking issues in existing network architectures.
REFERENCES
Akyildiz, I.F., Lee, A., Wang, P., Luo, M. & Chou, W. (2014). A roadmap for traffic engineering in SDN-OpenFlow networks. Computer Networks, 71, 1−30.
Aljuhani, A. & Alharbi, T. (2017). Virtualized network functions security attacks and vulnerabilities.
In IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC) (pp. 1–4).
New York, NY: IEEE.
ABSTRACT: Graphs play a major role in modeling many real-world problems. Due to the availability of huge amounts of data, graph processing in a serial environment becomes more complex. Thus, fast and efficient algorithms which effectively utilize modern technologies are required. The maximal clique problem is one of the graph processing problems that arises in many applications. The Bron-Kerbosch (BK) algorithm is the most widely used and accepted algorithm for listing every maximal clique in a graph. Here, the idea of a parallel version of the BK algorithm is proposed, which will reduce the computation time to a large extent compared with its serial implementation. It utilizes a cluster computing strategy.
1 INTRODUCTION
1.2 BK algorithm
Of all the algorithms for maximal clique enumeration, the Bron-Kerbosch (BK) algorithm is the most widely used method. Designed by two Dutch scientists, Joep Kerbosch and Coenraad Bron, this algorithm was published in 1973. Although many different algorithms with better theoretical worst-case behavior have been proposed, BK still remains better in practical applications for listing all the cliques.
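For reference, a minimal sketch of the classical (non-pivoting) BK recursion in Python; R is the growing clique, P the candidate vertices, and X the already-processed vertices that prevent duplicate output:

```python
def bron_kerbosch(R, P, X, graph, cliques):
    """Append every maximal clique extending R to `cliques`.
    `graph` maps each vertex to the set of its neighbors."""
    if not P and not X:
        cliques.append(R)          # nothing can extend R: it is maximal
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & graph[v], X & graph[v], graph, cliques)
        P.remove(v)                # v handled; exclude it from later branches
        X.add(v)

# Example: a triangle plus a pendant vertex
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
bron_kerbosch(set(), set(graph), set(), graph, cliques)
print(cliques)                     # [{1, 2, 3}, {3, 4}]
```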
2 BACKGROUND
A maximal clique is a completely connected subgraph that is not a subset of any other bigger clique in that graph; i.e. we can never expand an already existing maximal clique by adding one more neighboring vertex to it. Every graph having n vertices can have at most 3^(n/3) maximal cliques (Tomita et al., 2006).
The biggest clique in a graph is the maximum clique. The maximum common subgraph problem can be reduced to the maximal clique enumeration problem (Jayaraj et al., 2016), where the former is an NP-complete problem. Figure 1 shows the maximal cliques in an example graph G.
The maximal clique enumeration problem originally arose in different areas of research as a set of related problems, and the algorithms for solving those problems can be regarded as the initial algorithms for clique detection. The algorithm of Harary and Ross (1957) was the first broadly acknowledged attempt at listing all of the maximal cliques in a graph; it presents a technique for discovering connections between individuals, using the sociometric data between them, that form a clique.
3 LITERATURE SURVEY
With the introduction of the backtracking search strategy, the maximal clique enumeration problem gained momentum to a great extent. The efficiency of combinatorial optimization algorithms is improved by using backtracking, which limits the size of the algorithm's search space. A set of viability criteria is established to keep the backtracking algorithm from exploring non-promising paths (Schmidt et al., 2009).
4 PROPOSED SYSTEM
Here we implement a new parallel version of the existing BK algorithm that is efficient for maximal clique enumeration. We take each node in the graph and explore its neighbors in parallel; i.e. each node is explored by an independent worker node, thus reducing the overall computation time. A cluster manager divides the work between the worker nodes, which are many in number and work simultaneously to produce a faster result.
In the proposed system, the graph is stored in a file as an edge list; i.e. each edge is represented individually by the pair of vertices it connects. The graph is processed using branch and bound criteria and a tree is created. Each worker node traverses the tree in a BFS manner to find the cliques in it. Later, the results from all the worker nodes are combined to form the list of all maximal cliques. A minimal sketch of this per-vertex parallelization is given below.
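The sketch uses Python's multiprocessing rather than the GraphX/CUDA realization the paper envisions, and reuses the bron_kerbosch function sketched in Section 1.2; restricting each root vertex v to its higher-numbered neighbors keeps the workers from reporting duplicate cliques:

```python
from multiprocessing import Pool

# Assumed module-level so that worker processes can see it: vertex -> neighbors
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

def cliques_rooted_at(v):
    """Enumerate the maximal cliques whose smallest vertex is v."""
    later = {u for u in graph[v] if u > v}      # candidates
    earlier = {u for u in graph[v] if u < v}    # excluded: handled by other roots
    out = []
    bron_kerbosch({v}, later, earlier, graph, out)
    return out

if __name__ == '__main__':
    with Pool() as pool:                        # one independent task per vertex
        parts = pool.map(cliques_rooted_at, sorted(graph))
    all_cliques = [c for part in parts for c in part]
    print(all_cliques)                          # [{1, 2, 3}, {3, 4}]
```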
This can be implemented using any parallel framework, such as GraphX in Apache Spark, or using CUDA. GraphX is built on top of Apache Spark as an embedded framework and is used for processing graphs. It is a widely used distributed dataflow system that provides a graph abstraction sufficient to express existing graph APIs (Gonzalez et al., 2014).
CUDA is NVIDIA's parallel computing architecture, which enables users to obtain increased computing performance by harnessing the power of the graphics processing unit. It can also be used to implement an efficient parallel version of the BK algorithm.
5 CONCLUSIONS
Many real-world problems can be modeled using graphs, and the maximal clique enumeration problem has a wide range of applications in the fields of drug discovery and analysis, social hierarchy detection and many more. The serial computation of maximal cliques is a time-consuming task when graphs are large, and real-world graphs are mostly large. Thus, we propose a parallel version of the existing BK algorithm for the processing of large graphs. The BK algorithm is one of the most widely accepted graph algorithms for listing all the maximal cliques present in a graph.
REFERENCES
Bron, C. & Kerbosch, J. (1973). Algorithm 457: finding all cliques of an undirected graph. Communica-
tions of the ACM, 16(9), 575–577.
Gonzalez, J.E., Xin, R.S., Dave, A., Crankshaw, D., Franklin, M.J. & Stoica, I. (2014). Graphx: Graph
processing in a distributed dataflow framework. In OSDI, Volume 14, (pp. 599–613).
Harary, F. & Ross, I.C. (1957). A procedure for clique detection using the group matrix. Sociometry
20(3), 205–215.
Jayaraj, P., Rahamathulla, K. & Gopakumar, G. (2016). A GPU based maximum common subgraph
algorithm for drug discovery applications. In IEEE International Parallel and Distributed Processing
Symposium Workshops (pp. 580–588). New York, NY: IEEE.
Mukherjee, A. & Tirthapura, S. (2017). Enumerating maximal bicliques from a large graph using
MapReduce. IEEE Transactions on Services Computing.
Schmidt, M.C., Samatova, N.F., Thomas, K. & Park, B.-H. (2009). A scalable, parallel algorithm for maximal clique enumeration. Journal of Parallel and Distributed Computing, 69(4), 417–428.
Tomita, E., Tanaka, A. & Takahashi, H. (2006). The worst-case time complexity for generating all maxi-
mal cliques and computational experiments. Theoretical Computer Science, 363(1), 28–42.
Wen, X., Chen, W.-N., Lin, Y., Gu, T., Zhang, H., Li, Y., Yin, Y. & Zhang, J. (2017). A maximal clique based multi-objective evolutionary algorithm for overlapping community detection. IEEE Transactions on Evolutionary Computation, 21(3), 363–377.
Xu, Y., Cheng, J. & Fu, A.W.-C. (2016). Distributed maximal clique computation and management.
IEEE Transactions on Services Computing, 9(1), 110–122.
ABSTRACT: Different techniques have been utilized to study how to find similar questions
from recorded archives. These similar questions will have multiple answers associated with them
so that end-users have to carefully browse to find a relevant one. To tackle both problems, a novel
method of retrieving and ranking similar questions, combined with a data-driven approach of
selecting answers in Community Question Answering (CQA) systems, is proposed. The presented
approach for similar question retrieval combines a regression procedure that maps topics deter-
mined from questions to those found from question-answer pairs. Applying this can avoid issues
due to distinctions in vocabulary used within question-answer sets and the inclination of queries
to be shorter than their answers. To alleviate answer-ranking problems, a scheme via pairwise
comparisons is presented. In the offline learning component, the scheme sets up positive, negative and neutral training samples, represented as preference pairs, by means of data-driven observations. The model incorporates these three sorts of training samples together. Then, utilizing the offline-trained model, the answer candidates are sorted by their order of preference.
1 INTRODUCTION
One of the fastest developing customer-generated content portals, the Community Question
Answering (CQA) system has emerged as a huge market that satisfies complex information
needs. CQA provides a platform for customers to ask questions on any topic and also answer
others as they wish. It also enables a search through the recorded past Question-Answer (QA)
set. Conventional factual QA can be answered by simply retrieving named entities or content
from available documents, while CQA extends its significance to answer complicated ques-
tions such as reasoning, open-ended, and advice-seeking questions. CQA places few restric-
tions, if any, on who can post and who can answer a query and is thus quite open. Both the
general CQA sites such as Yahoo! Answers and Quora, and the specialized ones like Stack
Overflow and HealthTap, have had a significant influence on society in the last decade.
Even though there is active user participation, certain phenomena result in question dep-
rivation in CQA portals. For example, users have to wait a long time before getting responses
to their queries and a considerable number of questions never get any answer, and the askers
are left unsatisfied. The situation is probably caused by the following: (1) the posted queries
may be ambiguous, ineffectively stated or may not invoke curiosity; (2) the CQA systems may
not effectively direct recent questions to the appropriate answerers; (3) the potential answer-
ers, having the required knowledge, may not be available or are overwhelmed by the number
of incoming questions. This third situation often arises in specialized CQA portals, where
answering is restricted to authorized specialists only. With reference to the first case, ques-
tion quality modeling can check the question quality and can assist in requesting that askers
restructure their queries. For the other two cases, the situation can be addressed by means of
question routing. Question routing is performed by expertise matching and consideration of
the likelihood of potential answerers. It works by exploring the human resources currently
associated with the system. Besides that, solved past queries can be reused to answer newly posted questions.
2 LITERATURE REVIEW
State-of-the-art techniques such as Bag-of-Words (BOW) and its weighted variants, term frequency–inverse document frequency (tf-idf) and BM25 (Robertson & Walker, 1997), can calculate the lexical similarity between two documents, but they do not consider semantic and contextual information. In the past decade, topic modeling has become an important technique in the field of text analysis. The topics that characterize a document can be treated as its semantic representation; therefore, to find the semantic similarity between documents, we can use topic distributions obtained using LDA. Various approaches for applying topic modeling to historical QA have been proposed. For finding similar questions, topic modeling along with topic distribution regression (Chahuara et al., 2016) is used.
Consider a corpus C of size L consisting of question-answer pairs, C = {(q_i, a_i)}, where Q = {q1, q2, …, qL} and A = {a1, a2, …, aL} are, respectively, the question and answer sets, so that q_i ∈ Q and a_i ∈ A for every (q_i, a_i) ∈ C.
In such portals, answers are likely to be longer than questions, and the questions may contain only a few relevant words. This can restrict a model's capacity to detect hidden trends. A way of overcoming this is to treat each question qi as consisting of its text along with a title and description. Moreover, every qi may be associated with multiple answers, and these answers are concatenated to obtain each term ai; this is done so as to best determine the question's relevance based on the contextual details they provide. Figure 1 illustrates the proposed framework. The job of retrieving similar questions can thus be reduced to the task of ranking the QA pairs contained in the created set C by their similarity to the question q, generating a result whose top-ranked element has the highest similarity found.
In the learning phase of the task of extracting similar questions, the set C already created is
used in training two topic models: first, LDA on the question set Q; second, LDA on the ques-
tion-answer pair set QA. The learning phase provides topic distributions associated with the
sets Q and QA, θiQ and θiQA as the result. Using these topic distribution samples, a regression
model is trained to learn the translation function between the Q and QA distributions. Dur-
ing deduction, using the Q set LDA model, we determine the topic distribution of a question
(θ*Q), which is mapped to its probable QA topic distribution (θ*QA) using the trained regression
model. Last, according to the similarity between each pair’s topic distribution and the new
question’s QA topic distribution, a similarity value is calculated and the value is used to rank
the QA corpora. The questions, respective to the QA distribution, that were found similar to
the presented question can be considered as the output of the primary batch of processing.
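A sketch of this learning phase with scikit-learn; the topic count, Ridge regression as the translation function, and cosine similarity for the final ranking are all assumptions made for illustration (Chahuara et al. (2016) describe the exact procedure):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

# questions: list of question texts; answers: concatenated answers per question
vec_q = CountVectorizer(stop_words='english')
vec_qa = CountVectorizer(stop_words='english')
lda_q = LatentDirichletAllocation(n_components=50, random_state=0)
lda_qa = LatentDirichletAllocation(n_components=50, random_state=0)

theta_q = lda_q.fit_transform(vec_q.fit_transform(questions))
qa_docs = [q + ' ' + a for q, a in zip(questions, answers)]
theta_qa = lda_qa.fit_transform(vec_qa.fit_transform(qa_docs))

# Regression mapping Q-space topic distributions to QA-space ones
translate = Ridge(alpha=1.0).fit(theta_q, theta_qa)

# Deduction: infer the new question's Q topics, map them, rank the corpus
theta_new = translate.predict(lda_q.transform(vec_q.transform([new_question])))
ranking = cosine_similarity(theta_new, theta_qa).ravel().argsort()[::-1]
```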
The answer-selection problem in CQA is analogous to the conventional ranking task,
where the given question and its set of answers are comparable to a query and a set of
relevant entities. The objective is thus optimized to find an ideal ranking system of the
answer candidates according to their pertinence, exactness and quality with respect to
the given query. A ranking function which uses relevance intuition can be designed in
the following three ways: (1) pointwise − in this type of method (Dalip et al., 2013; Shah
& Pomerantz, 2010) the relevance measure of each individual QA pair is estimated by
a standard classification or regression model; (2) pairwise − in these kinds of methods
(Bian et al., 2008; Hieber & Riezler, 2011; Cao et al., 2006; Li et al., 2015), the preference
of two answer candidates is predicted using a 0–1 classifier; (3) listwise − in this type, the
integrated ranking of all candidate answers to the same question is performed at the same
time (Xu & Li, 2007).
The answer selection problem is addressed using the novel PLANE model proposed by Nie et al. (2017). Given a question, it retrieves a set of top k relevant questions Q = {q1, …, qk} from the QA repositories; according to Nie et al. (2017), this is done using a k-NN question-matching algorithm. Each question qi is assumed to have a set of mi ≥ 1 answers, represented by Ai = {ai0, ai1, …, aimi}, where ai0 is the answer of qi selected as best by community users. From the identified relevant questions, a learning-to-rank design is developed to sort all the answers associated with them. Two training sets X and U are built from the set of QA pairs. x1 and x2 denote the N-dimensional feature vectors of the two QA pairs compared in a single comparison, and y denotes the preference relationship of the pair, whose value is found as below:

y = +1, if x1 ≻ x2
y = −1, if x2 ≻ x1
where ≅ represents a neutral preference relationship between the two QA pairs under consideration; u(1) and u(2) represent, respectively, the N-dimensional feature vectors of the two QA pairs. Taking into account all pairs of comparisons in this pattern, U = {(u_j, 0)}, j = 1, …, M, is created. Jointly incorporating X and U, the following pairwise learning-to-rank model is proposed:

min_w  Σ_{i=1}^{N} [1 − y_i wᵀx_i]_+ + λ‖w‖₁ + μ Σ_{j=1}^{M} |wᵀu_j|
where x_i = x_i(1) − x_i(2) ∈ R^N and u_j = u_j(1) − u_j(2) ∈ R^N denote the training instances from X and U, respectively; N and M denote, respectively, the number of preference pairs in X and U, and the desired coefficient vector is w ∈ R^N. The first term in the equation is a hinge loss, which performs the binary preference judgment; it gives a relatively rigid and convex upper bound on the binary indicator function. Empirical risk minimization of this loss is equivalent to the conventional formulation of the Support Vector Machine (SVM): as with support vectors, points lying outside the margin boundaries that are properly classified are not penalized, while points on the wrong side of the hyperplane or within the margin boundaries are penalized linearly, in proportion to their distance from the proper boundary. The second term is an l1 norm, which aids feature selection and regularizes the summation of the coefficient values; the final term penalizes the preference distance between the indistinguishable answers of the same question.
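A reduced sketch of the pairwise idea: training a hinge-loss classifier on preference difference vectors approximates the first term of the objective (the l1 and neutral-pair terms are dropped here for brevity, so this is not the full PLANE model); `X_diff` and `y` are the assumed training arrays:

```python
import numpy as np
from sklearn.svm import LinearSVC

# X_diff: rows are difference vectors x_i = x_i(1) - x_i(2); y in {+1, -1}
clf = LinearSVC(C=1.0, loss='hinge')
clf.fit(X_diff, y)
w = clf.coef_.ravel()                  # learned preference weight vector

def rank_answers(candidate_features):
    """Sort answer candidates by their score under w, best first."""
    scores = candidate_features @ w
    return np.argsort(-scores)
```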
3 PROPOSED FRAMEWORK
The system tries to solve the problem of finding similar questions and selecting an answer
from them using the proven efficient methods of the relevant area. For the purpose of train-
ing, we require a large amount of data. The historical archive data from online general CQA
websites such as Quora and Yahoo! Answers, and specialized ones such as Stack Overflow and
HealthTap, are suitable as sources of data for the system. Retrieving this data and processing
it into the form required comprises the first phase of the framework.
The data is then fed to the topic modeling phase using the LDA method. LDA works
well for topic modeling with data from a variety of topics. As the data is from the CQA,
where questions from different fields appear, the LDA should be efficient. After finding the
translation function between questions and question-answer pairs, as described in Section 2,
using the regression model, the topic model of the expected answer of the new question is
determined. Those question-answer pairs that are similar to this found topic are considered
to be “similar questions”.
Now, using these question-answer pairs, the PLANE model is trained in the form of posi-
tive, negative and neutral preference pairs as described in Section 2. The PLANE model is
trained offline using the constructed pairs. On providing the new input question, it returns
the ranked answers. The best answer will be the one that solves the similar question and that
was chosen by users. Finally, the system is evaluated for its performance by comparison with
systems based on other techniques. Accuracy, precision and recall are the metrics that can measure the performance of such learning systems. Practically, user feedback can be collected
from user satisfaction with the related answer shown. On new questions arriving at the site
under the same topic, a trace-back mechanism can increase the efficiency of prediction.
4 EVALUATION OF PERFORMANCE
The data required for the system is from the online community question-answering websites.
As it tries to solve a real-time problem of finding a suitable answer at the time a new ques-
tion arrives, the efficiency of the system can be measured primarily from user feedback. The
objectives for which the model was designed can be evaluated to check whether they were
met:
• to return the best existing relevant information from the historical archive data;
• to reduce the waiting time to get answers to the question.
Because the two phases in the system use novel techniques that are said to outper-
form other methods for the same problem, the combination of both should give better
efficiency.
5 CONCLUSION
This paper proposes a combination of two techniques to address answer retrieval from
archive data for a new question that arrives. For finding similar questions from the existing
data, topic modeling using LDA and regression models is preferred. In the successive phase,
for ranking answers, an enhanced SVM model named PLANE is used. As mentioned above,
the system uses novel techniques that are proven to outperform other methods, and the pro-
posed system as a whole should provide better results in terms of accuracy.
REFERENCES
Bian, J., Liu, Y., Agichtein, E. & Zha, H. (2008). Finding the right facts in the crowd: Factoid question
answering over social media. In Proceedings of the 17th International Conference on World Wide Web
(pp. 467–476). New York, NY: ACM.
Cao, Y., Xu, J., Liu, T.Y., Li, H., Huang, Y. & Hon, H.W. (2006). Adapting ranking SVM to document
retrieval. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and
Development in Information Retrieval (pp. 186–193). New York, NY: ACM.
Chahuara, P., Lampert, T. & Gancarski, P. (2016). Retrieving and ranking similar questions from
question-answer archives using topic modelling and topic distribution regression. In N. Fuhr, L.
Kovács, T. Risse & W. Nejdl (Eds.), Research and advanced technology for digital libraries. TPDL
2016. Lecture Notes in Computer Science (Vol. 9819, pp. 41–53). Cham, Switzerland: Springer.
Dalip, D.H., Goncalves, M.A., Cristo, M. & Calado, P. (2013). Exploiting user feedback to learn to rank
answers in Q&A forums: A case study with stack overflow. In Proceedings of the 36th International
ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 543–552). New
York, NY: ACM.
Hieber, F. & Riezler, S. (2011). Improved answer ranking in social question-answering portals. In Pro-
ceedings of the 3rd International Workshop on Search and Mining User-Generated Contents (pp.
19–26). New York, NY: ACM.
Li, X., Cong, G., Li, X.L., Pham, T.A.N. & Krishnaswamy, S. (2015). Rank-geoFM: A ranking based
geographical factorization method for point of interest recommendation. In Proceedings of the 38th
International ACM SIGIR Conference on Research and Development in Information Retrieval (pp.
433–442). New York, NY: ACM.
Nie, L., Wei, X., Zhang, D., Wang, X., Gao, Z. & Yang, Y. (2017). Data-driven answer selection in com-
munity QA systems. IEEE Transactions on Knowledge and Data Engineering, 29(6), 1186–1198.
Robertson, S.E. & Walker, S. (1997). Some simple effective approximations to the 2-Poisson model for
probabilistic weighted retrieval. Readings in Information Retrieval, 345, 232–241.
Shah, C. & Pomerantz, J. (2010). Evaluating and predicting answer quality in community QA. In Pro-
ceedings of the 33rd International ACM SIGIR Conference on Research and Development in Informa-
tion Retrieval (pp. 411–418). New York, NY: ACM.
Xu, J. & Li, H. (2007). AdaRank: A boosting algorithm for information retrieval. In Proceedings of
the 30th Annual International ACM SIGIR Conference on Research and Development in Information
Retrieval (pp. 391–398). New York, NY: ACM.
J. Manjusha
Department of Computer Science and Engineering, Government Engineering College, Thrissur, India
APJ Abdul Kalam Technological University, Kerala, India
A. James
Department of Computer Science and Engineering, Government Engineering College, Thrissur, India
Saravanan Chandran
National Institute of Technology, Durgapur, West Bengal, India
1 INTRODUCTION
Optical Character Recognition (OCR) is one of the challenging areas of computer vision and pattern recognition. It is the process of converting printed or handwritten documents into a digitized form in which the text can be recognized automatically. OCR is categorized as offline or online based on how image acquisition is carried out. Malayalam handwritten character recognition is very important because the script is used by a large population, but the variability in writing style and representation makes recognition difficult. Applications of OCR include data entry for business processes such as cheque processing and passport verification. Handwritten character recognition includes stages such as image acquisition, preprocessing, segmentation of characters, feature extraction, and recognition. Compared to all other stages, feature extraction plays the most important role in determining the accuracy of recognition. Challenges faced by handwritten Malayalam character recognition systems include the unavailability of a standard dataset and the unlimited variations in human handwriting. Traditional methods usually require artificial feature design and manual tuning of the classifier, and their performance is determined to a large extent by the empirical features used. The traditional method has reached its limit through decades of research, while the emergence of deep learning provides a new way to break this limit. In this paper, a CNN-based (Convolutional Neural Network-based) handwritten character recognition framework is proposed, in which proper sample generation, a suitable training scheme and an appropriate CNN network structure are employed according to the properties of handwritten characters.
Here we discuss handwritten character recognition using a CNN model called AlexNet.
In the area of Malayalam character classification, previous works have mainly focused on traditional feature extraction methods, which are time-consuming since Malayalam contains a large number of character classes. Methods with high recognition rates have been reported for the handwritten recognition of Chinese, Japanese, Tamil, Bangla, Devanagari and Telugu using CNNs.
El-Sawy (2017) proposed handwritten Arabic character recognition using CNN. The CNN
model was trained and tested using a database of 16,800 handwritten Arabic character images
with an average misclassification error of 5.1% for the test data.
Tsai (2016) built a Deep Convolutional Neural Network (D-CNN) for recognizing handwritten Japanese characters, exploring a VGG-16 network along with 11 different convolutional neural network architectures. The general architecture consists of relatively small convolutional layers, each followed by an activation layer and a max pooling layer, with a final FC (Fully Connected) layer having the same number of channels as the number of classes. It achieved an accuracy rate of 99.53% for overall classification.
Roy et al. (2017) introduced a layer-wise deep learning approach for isolated Bangla hand-
written compound characters. Supervised layer-wise trained D-CNNs are found to outper-
form standard shallow learning models such as Support Vector Machines (SVM) as well as
regular D-CNNs of similar architecture by achieving an error rate of 9.67%, and thereby
setting a new benchmark on the CMATERdb 3.1.3.3 with a recognition accuracy of 90.33%,
representing an improvement of nearly 10%.
For Chinese, Xiao et al. (2017) used a Global Supervised Low-Rank Expansion (GSLRE) method and an Adaptive Drop-Weight (ADW) technique to cut the high computational cost of deeper networks. For HCCR with 3,755 classes, a CNN with nine layers was adopted, which reduces the network's computational cost by nine times and compresses the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy.
Md. Mahbubar Rahman et al. (2015) considered a CNN-based Bangla handwritten character recognition system, employing a CNN on normalized data to classify isolated characters. 20,000 handwritten characters with different shapes and variations were used in this study. The proposed BHCR-CNN misclassified 351 of 2,500 test cases, achieving an accuracy of 85.96%; on the training set, it misclassified 954 of 17,500 characters, giving an accuracy rate of 94.55%.
An integrated two-classifier model – CNN and SVM – for Arabic character recognition was introduced by Mohamed Elleuch and Kherallah (2016), with a dropout technique to reduce overfitting. The performance of the model was compared with the character recognition accuracies obtained by state-of-the-art Arabic Optical Character Recognition systems. The error rate without a dropout layer was recorded as 14.71%, and a considerably reduced error rate of 5.83% using dropout was reported.
An unsupervised CNN model for feature extraction and classification in multi-script recognition was proposed by Durjoy Sen Maitra and Parui (2015). For a larger character class problem, they performed a certain amount of training on a five-layer CNN. SVM was used as the classifier for six different character databases, all of which achieved an error rate of less than 5%.
Another approach for handwritten digit recognition, using CNN for feature extraction and SVM for recognition (Chunpeng Wu et al., 2014), achieved a recognition rate of 99.81% without rejection, and a recognition rate of 94.40% with 5.60% rejection.
Jinfeng Bai et al. (2014) proposed a Shared Hidden Layer deep Convolutional Neural Network (SHL-CNN), which recognizes both English and Chinese image characters; it reduced the recognition error by 16–30% compared with models trained on characters of only one language using a conventional CNN, and by 35.7% compared with state-of-the-art methods.
Zhong et al., (2015) presented a deeper CNN architecture for handwritten Chinese
character recognition (denoted as HCCR-GoogLeNet) which uses 19 layers in total.
3 PROPOSED METHOD
3.3 Preprocessing
The performance of any character recognition system is directly dependent upon the quality of the input documents. A data preprocessing step is used to remove noise from the character images. Salt-and-pepper noise is the most common type of noise present in an image.
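A minimal sketch of removing salt-and-pepper noise with a median filter, the usual remedy for this noise type (the paper does not name the exact filter it applies):

```python
import cv2

char_img = cv2.imread('character.png', cv2.IMREAD_GRAYSCALE)
# A 3x3 median filter replaces each pixel by the median of its neighborhood,
# which removes isolated salt-and-pepper outliers while keeping stroke edges.
denoised = cv2.medianBlur(char_img, 3)
```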
3.5 Testing
The testing module deals with the test images, which are obtained by randomly splitting the augmented dataset. It first preprocesses each input image and then classifies the unlabeled test images.
3.6 Classification
The final layer of the CNN is a softmax layer (Pranav P Nair et al., 2017), which is used to classify the given input image into a character class. The softmax function outputs values between 0 and 1, and the outputs over all classes sum to 1. The class with the maximum value is selected as the class for a particular input image.
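A small sketch of the softmax computation described above:

```python
import numpy as np

def softmax(logits):
    """Each output lies in (0, 1) and the outputs sum to 1 across classes."""
    e = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
predicted_class = int(probs.argmax())   # class with the maximum probability
```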
Character recognition has a wide range of applications in postal automation, tax returns, bank cheque processing and many more. Even though works on Malayalam character recognition have been reported, none has achieved 100% accuracy, and most of them take a long processing time.
Our proposed method uses automatic feature extraction with a CNN, which considerably reduces the training and testing time and produces a more accurate result. Compound Malayalam characters are also considered, which makes the system more useful for real-time application processing. The model needs a Core i7 PC at 2.6 GHz with 64 GB memory and a CUDA-enabled GPU for parallel processing.
REFERENCES
Bhanushali, S., Tadse, V. & Badhe, A. (2013). Offline handwritten character recognition using neural network. In Proceedings of National Conference on New Horizons in IT-NCNHIT (p. 155).
Chunpeng Wu, Wei Fan, Y.H.J.S.S.N. (2014). Handwritten character recognition by alternately trained
relaxation convolutional neural network. In 14th International Conference on Frontiers in Handwriting
Recognition (pp. 291–296).
Durjoy Sen Maitra, U.B. & Parui, S.K. (2015). CNN based common approach to handwritten character
recognition of multiple scripts. 13th International Conference on Document Analysis and Recognition
(ICDAR) (pp.1021–1025).
El-Sawy, A. (2017). Arabic handwritten characters recognition using convolutional neural network.
WSEAS Transactions on Computer Research 5, 2415–1521.
Gupta, A., Srivastava, M. & Mahanta, C. (2011). Offline handwritten character recognition using neural
network. In IEEE International Conference on Computer Applications and Industrial Electronics
(ICCAIE) (pp. 102–107). New York, NY: IEEE.
Jinfeng Bai, Zhineng Chen, B.F.B.X. (2014). Image character recognition using deep convolutional
neural network learned from different languages. In IEEE International Conference on Image
Processing (ICIP) (pp. 2560–2564).
Krizhevsky, A., Sutskever, I. & Hinton, G.E. (2012). Imagenet classification with deep convolutional
neural networks. In Pereira, F., Burges, C., Bottou, L. & Weinberger, K. (Eds). Advances in Neural
Information Processing Systems, 25, pp. 1097–1105. Curran Associates, Inc.
Md. Mahbubar Rahman, M.A.H. Akhand, S.I.P.C.S. (2015). Bangla handwritten character recognition
using convolutional neural network. International Journal of Image, Graphics and Signal Processing 8,
42–49.
Mohamed Elleuch, R.M. & Kherallah, M. (2016). A new design based-SVM of the CNN classifier
architecture with dropout for offline Arabic handwritten recognition. In International Conference on
Computational Science (ICCS) 80, (pp. 1712–1723).
Pranav P Nair, Ajay James, C.S. (2017). Malayalam handwritten character recognition using
convolutional neural network. International Conference on Inventive Communication and
Computational Technologies (ICICCT 2017) (pp. 278–281).
1 INTRODUCTION
use combined training data for both scripts, but such OCRs have not been very successful
due to the huge search required in a large database. Such OCRs also suffer from errors when
words are classified as belonging to the wrong script (Kaur & Mahajan, 2015). Thus, most
MOCRs perform word segmentation followed by script identification and then recognition.
This paper discusses the script identification for English, Hindi and Malayalam scripts in
documents. This is achieved by extracting features from the horizontal projection profile of
text lines.
The rest of the paper is organized as follows: Section 2 reviews previous work and Section 3 presents the proposed work. The paper concludes in Section 4.
2 PREVIOUS WORKS
Philip and Samuel (2009) used dominant singular values and Gabor features for the classification of printed English and Malayalam script. To identify the script, the text is segmented to word level and Gabor features are extracted. Dominant singular values are used for character recognition.
3 PROPOSED WORK
This paper presents a novel script identification technique for identifying English, Hindi and
Malayalam scripts from multiscript documents. The steps of the proposed system are shown
in Figure 2.
3.2 Preprocessing
Image preprocessing is the process of improving image quality for better understanding using
predefined methods. Commonly used preprocessing methods are noise removal, binariza-
tion, and skew correction. The noise may occur during scanning or transferring of document
images. Smoothing operations are used for noise removal. Converting gray scale image from
256 gray levels to two levels is called binarization. The binarization is generally done by tak-
ing a threshold value for an image and set intensity values to one for pixels which have larger
value than threshold value. Set intensities to zero if it is less than the threshold value. Skew is
a deformation that is introduced while scanning a document. It is necessary for aligning the
text lines to the coordinate axes.
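A minimal sketch of this thresholding rule (the default threshold of 128 is an assumed mid-gray value, not one specified in the paper):

```python
import numpy as np

def binarize(gray, threshold=128):
    # Map 256 gray levels to two levels: 1 where the pixel intensity
    # exceeds the threshold, 0 otherwise.
    return (gray > threshold).astype(np.uint8)

page = np.array([[10, 200], [130, 90]])
print(binarize(page))  # [[0 1]
                       #  [1 0]]
```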
3.3 Segmentation
White gaps between text lines are used for segmentation. The horizontal projection of the scanned image is used for line segmentation; the horizontal projection of a trilingual document is shown in Figure 3. The zero-valued points in the figure represent the spaces between text lines, and line segmentation is performed at these points.
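A sketch of this line segmentation step under the stated zero-gap rule (function and variable names are illustrative):

```python
import numpy as np

def segment_lines(binary_img):
    # Horizontal projection: number of ink pixels in each row.
    profile = binary_img.sum(axis=1)
    lines, start = [], None
    for row, ink in enumerate(profile):
        if ink > 0 and start is None:
            start = row                 # a text line begins here
        elif ink == 0 and start is not None:
            lines.append((start, row))  # zero-valued row: the line ends
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines                        # (top_row, bottom_row) spans
```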
Table 1. Range of P value for each script.

Script       Range of P value
Malayalam    0.85–0.95
English      1.10–1.30
Hindi        2.00–2.50
3.5 Classification
Classification is based on the features extracted in the previous step. For this proposed sys-
tem, a rule-based classifier is used.
STEP 1: For each text line, determine the first and second peak points, P1 and P2, respectively.
STEP 2: Find the projection values at these points, value(P1) and value(P2).
STEP 3: Calculate the decision parameter $P_{value}$ from these values as

$P_{value} = \dfrac{value(P_1)}{value(P_2)}$ (1)
STEP 4:
If P value falls in the range of 0.85 to 0.95 then the text is identified as Malayalam.
If P value falls in the range of 1.10 to 1.30 then the text is identified as English.
If P value falls in the range of 2.00 to 2.50 then the text is identified as Hindi.
The ranges of $P_{value}$ for the English, Hindi and Malayalam scripts are shown in Table 1. Based on these observed values, rules are made and fed into the rule-based classifier.
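A sketch of the rule-based classifier of Section 3.5. It assumes P1 and P2 are the first two local maxima of a line's projection in top-to-bottom order, which is one plausible reading of STEP 1; a line is expected to have at least two peaks:

```python
import numpy as np

def identify_script(line_profile):
    p = np.asarray(line_profile, dtype=float)
    # Local maxima of the horizontal projection, top to bottom.
    peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] >= p[i + 1]]
    p1, p2 = peaks[0], peaks[1]
    p_value = p[p1] / p[p2]                      # Eq. 1
    if 0.85 <= p_value <= 0.95:
        return "Malayalam"
    if 1.10 <= p_value <= 1.30:
        return "English"
    if 2.00 <= p_value <= 2.50:
        return "Hindi"
    return "unknown"
```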
4 CONCLUSIONS
In this paper, a novel technique for script identification in English, Hindi and Malayalam
multilingual documents has been proposed. Features for classification are extracted from the
horizontal projection of scripts. A rule-based classifier is built from the knowledge obtained
from the sample data.
1 INTRODUCTION
The emerging technology of Software Defined Networking (SDN) emphasizes the decoupling of the control plane and the data plane. This separation helps to provide only an abstract view of network resources and their state to external applications. In large-scale SDN, the control plane is composed of multiple controllers that provide a global view of the entire network. The controllers are programmable and act as intermediaries between the network administrator and the data plane. Control intelligence is decoupled into the control plane: the controllers install the rules in the flow tables that are used to forward the traffic flows entering the network, while switching is done by OpenFlow (OF) switches in the forwarding plane. However, the separation of the control and data planes introduces performance limitations and reliability issues (Yeganeh et al. 2013, Jarschel et al. 2012). They are:
a. The nodes in the network must be continuously monitored and controlled using a proactive or reactive method. The nodes communicate with their corresponding controllers to obtain the new forwarding rules to be installed, and based on these rules they process the new flows arriving at them. The response time of the overall system increases when the communication overhead between controllers and switches is high, because a controller has limited processing power (Yeganeh et al. 2013) relative to the number of nodes assigned to it, or when the number of flow queries is too high.
b. In large-scale SDN, the density of network elements and traffic flows is very high. Using a single, physically centralized controller creates the possibility of a Single Point of Failure (SPoF), so multiple controllers have to be placed to ensure that the SPoF is eliminated from the network. In SDN, the control plane creates a logically centralized view of the entire network. In order to create this view, the controllers need to communicate with each other and update/synchronize their databases (Tootoonchian and Ganjali 2010). To reduce the inter-communication among the controllers, an overlay network linking the controllers can be created (Shi-duan 2012).
2 RELATED WORKS
This section gives an overview of some related works on controller placement in large-scale SDN. The main problem is to find the number of SDN controllers required for a given network topology and where to place them, so that performance is reliably maintained even in cases of failure.
In WANs, if the placement of controllers introduces a significant increase in path delays, it will lengthen the time taken for the control plane to reach steady state. As explained in (Heller et al. 2010), the problem is theoretically not new: if only propagation latency is considered, it is akin to a warehouse or facility location problem, which can be solved with Mixed Integer Linear Programming (MILP) tools.
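For concreteness, the latency-only variant can be written as a textbook k-median-style MILP (a standard formulation, assumed here rather than taken from the cited papers): with binary $y_c$ opening a controller at candidate site $c$ and $x_{vc}$ assigning switch $v$ to controller $c$,

$\min \sum_{v \in V} \sum_{c \in F} d(v,c)\, x_{vc}$ subject to $\sum_{c \in F} x_{vc} = 1 \;\; \forall v \in V$, $x_{vc} \le y_c \;\; \forall v, c$, $\sum_{c \in F} y_c = k$, and $x_{vc}, y_c \in \{0,1\}$.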
The prior work of Heller et al. (2010) encourages consideration of the controller placement problem and measures the impact of controller placement on existing topologies such as Internet2 (Yeganeh et al. 2013) and on various cases available in the Internet Topology Zoo (Knight et al. 2011). In reality, the main purpose was not to locate the ideal positions for the controllers but to provide an evaluation of a major design problem that requires further examination. It was demonstrated that optimal solutions can be discovered for practical network instances in failure-free cases by computing the complete solution offline. This work also affirms that in most topologies the existing response-time requirements cannot be satisfied using a single controller, although resiliency aspects were not considered.
Zhang et al. (2016) address the Multi-objective Optimization Controller Placement (MOCP) problem and focus on three objectives: maximizing controller load-balancing capacity, maximizing network reliability and minimizing control-path latency. Their approach provides an optimal controller placement strategy in which routing requests are optimally distributed among multiple controllers. The work converts the MOCP into a mathematical model as the optimization objective function and develops an Adaptive Bacterial Foraging Optimization (ABFO) algorithm to solve it, claiming that the above objectives are optimized efficiently and effectively. In this method, however, the optimal number of controllers is not identified dynamically. For a large-scale network, this identification becomes exhaustive and will reduce the reliability of the solution.
The work of Borcoci et al. (2015) presented an analytical view on using multi-criteria decision algorithms (MCDA) to choose an optimal solution from several controller placements
3 PROPOSED SYSTEM
The main goal of the proposed system is to design an approach that automatically computes the optimal number of controllers needed to manage a given SDN network, together with their corresponding locations.
process, the bandwidth of links connected to it can be considered. The overall design of the
proposed method is given in Figure 1.
The proposed system is organized into four submodules: network topology creation, link weight calculation, network partitioning and controller placement. In the first module, network topologies are created using GARR and GEANT (Knight et al. 2011). The bandwidth and latency of the links present in these topologies are considered to derive the link weight. The third module partitions the network using modified affinity propagation. Finally, the controllers are placed at the locations of the exemplars.
For the modified Affinity Propagation method, there is no need to initialize the number of controllers or their locations. Affinity Propagation (AP) is an exemplar-based clustering approach with numerous benefits, including good performance, no need to initialize the value of k, and the ability to obtain exemplars with high accuracy. AP is modified in such a way that it adapts to the problem of controller placement in SDN; in particular, the similarity measurement between two nodes adopts both the latency and the bandwidth of the links.
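As a rough sketch of this exemplar-based step (it uses scikit-learn's stock AffinityPropagation on a precomputed similarity matrix rather than the paper's modified-AP, and the similarity values are illustrative):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# S[u, v]: similarity between nodes u and v, e.g. derived from the
# composite link weight; here, negated distances (higher = more similar).
S = -np.array([[0.0, 4.0, 7.0, 8.0],
               [4.0, 0.0, 3.0, 6.0],
               [7.0, 3.0, 0.0, 2.0],
               [8.0, 6.0, 2.0, 0.0]])

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
controllers = ap.cluster_centers_indices_  # exemplars = controller sites
assignment = ap.labels_                    # node-to-controller mapping
```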
In a real network topology, there may be no direct link between two nodes, but they are reachable via other links present in the network. So the reachable shortest-path distance L(u, v) is considered as the latency (Zhao et al. 2016), which can be found using the Floyd-Warshall all-pairs shortest paths algorithm (Cormen et al. 2009). The bandwidth of the links is already available in the Internet topologies GARR and GEANT.
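A minimal Floyd-Warshall sketch for the latency matrix (NumPy; the direct-link latencies would come from the topology data, with np.inf where no direct link exists):

```python
import numpy as np

def all_pairs_latency(direct):
    # direct[u][v]: direct-link latency, np.inf where no direct link.
    L = np.array(direct, dtype=float)
    n = len(L)
    for k in range(n):
        for u in range(n):
            for v in range(n):
                # Relax the u -> v latency through intermediate node k.
                L[u, v] = min(L[u, v], L[u, k] + L[k, v])
    return L  # L[u, v] is the shortest-path latency L(u, v)
```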
The exemplar-based clustering problem is formulated using the link weight assigned to each edge, considering the fact that it minimizes latency and equalizes the controller load. The link weight is a composite metric, computed using Eq. 1. An assumption is made that the locations of the controllers coincide with some of the nodes.
$L_{avg} = \dfrac{1}{n} \sum_{v \in V} \min_{c \in C} L(v, c)$ (2)
The optimization algorithm should obtain a controller placement that minimizes the above-mentioned latencies.
where $n_c$ denotes the number of nodes under controller c. For Eq. 4, it must be taken into account that, in the case of failures, the control of nodes may be moved from the primary controller to other controllers. This reassignment can increase the load of the respective controllers. An optimization algorithm should minimize Eq. 4 in order to find a controller placement that provides good performance.
where $L(c_u, c_v)$ is the distance between two controllers $c_u$ and $c_v$. To minimize Eq. 5, we need to place the controllers close to each other; the problem is that this may increase the node-to-controller latencies given by Eq. 2 and Eq. 3.
This paper has presented a work-in-progress study on the application of several weighted criteria for controller placement in large-scale SDN. The proposed method dynamically computes the number of controllers needed and their locations for a given network using modified affinity propagation (modified-AP) clustering. The approach additionally specifies the most suitable controller for each switch. The method can be applied to a scenario with failure-free assumptions, given that it attains an overall optimization.
As future work, the proposed method can be extended to include fault-tolerance and reliability aspects to improve the efficiency of the controller placement. Simulations can be conducted on large-scale SDN by considering additional metrics such as the capacity of the controllers.
REFERENCES
Borcoci, E., R. Badea, S.G. Obreja, & M. Vochin (2015). On multi-controller placement optimization
in software defined networking-based wans. ICN 2015: The Fourteenth International Conference on
Networks, 261–266.
Cormen, T., C. Leiserson, & R. Rivest (2009). Introduction to algorithms. Massachusetts, USA: The
MIT Press.
Heller, B., R. Sherwood, & N. McKeown (2010). The controller placement problem. In Proc. HotSDN,
pp. 7–12.
Jarschel, M., F. Lehrieder, Z. Magyari, & R. Pries (October 2012). A flexible openflow-controller
benchmark. In Proc. European Workshop on Software Defined Networks (EWSDN), Darmstadt,
Germany, pp. 48–53.
Knight, S., H.X. Nguyen, N. Falkner, R. Bowden, & M. Roughan (2011). The internet topology zoo.
IEEE JSAC 29, 1765–1775.
Shi-duan (October 2012). On the placement of controllers in software defined networks. ELSEVIER,
Science Direct 19, 92–97.
Tootoonchian, A. & Y. Ganjali (2010). HyperFlow: A distributed control plane for OpenFlow. In Proc. INM/WREN.
Xiao, P., W. Qu, H. Qi, Z. Li, & Y. Xu (2014). The SDN controller placement problem for WAN. In Communications in China (ICCC), 2014 IEEE/CIC International Conference on, pp. 220–224. IEEE.
Yeganeh, S.H., A. Tootoonchian, & Y. Ganjali (February 2013). On scalability of software-defined networking. IEEE Communications Magazine 51, 136–141.
Zhang, B., X. Wang, L. Ma, & M. Huang (2016). Optimal controller placement problem in internet-
oriented software defined network. In Proc. International Conference on Cyber-Enabled Distributed
Computing and Knowledge Discovery (CyberC), pp. 481–488.
Zhao, J., H. Qu, J. Zhao, Z. Luan, & Y. Guo. (2016). Towards controller placement problem for software-
defined network using affinity propagation. Electronics Letters 53, 928–929.
ABSTRACT: Twitter is one platform where people express their thoughts on any trending
topics they are interested in. The exploration of this data can help us to find peer groups or
group of users with similar interests. As in any other social network, it is also subject to various spam attacks. So, before identifying peer groups, the accounts that are not genuine or are regularly involved in spamming activities have to be filtered out. The main idea is to make use of the URLs the accounts share, and their frequency, to identify the account type. Here, instead of focusing on one account, a group of accounts, or campaign, is identified based on the similarity of the accounts. The similarity measure is calculated by applying Shannon's information theory to estimate the amount of information in a URL and then using that value to find the information shared by each account. Once similar accounts are identified, a graph is plotted connecting those accounts whose similarity measure is above a threshold. Potential campaigns are identified from this graph and then classified into spammers and normal users using machine learning algorithms. The normal users thus identified are members who have similar interests. To further improve efficiency, these members are grouped together based on their location, so that peer groups in a locality are identified. This peer group identification can help in connecting people with similar interests in a locality.
1 INTRODUCTION
The exploding growth of data has always opened new opportunities for those who explore it. The role of social networks in the life of a person is unfathomable these days; in fact, one's virtual friends may know more about one than one's own friends and family. The behavioral data of an individual is thus available in his or her social network accounts, and the right assessment of this social media activity can help to identify his or her character and interests.
Twitter is a microblogging social network where most users express their opinions on trending topics. The tweets they post or share help us find the interests of the users. The idea here is to identify a group of people with similar interests. Tweets generally allow text and URLs only, so the URLs play a significant role in the characteristics of a tweet (Zhang et al. 2016). Being able to identify the common URLs shared between users is an important measure in establishing a relation between similar users.
Being a very popular social media network, Twitter has its disadvantages as well: it, too, is subjected to spam attacks. Spamming in social networks can cause much more harm than traditional spamming such as email spamming (Grier et al. 2010). There will be many non-genuine users with fake profiles who like to promote illegitimate content. So, when identifying peer groups based on social network activity, we should be able to distinguish genuine accounts from spammers efficiently. After filtering out the spammers, we need to rightly group the authentic users with similar interests, and then we can apply geotagging techniques to group them based on their geographic coordinates.
In this article, we aim to identify peer groups in a locality based on their Twitter usage. For this purpose, we make use of the URLs that are part of their tweets. So, as the first step, we need to connect the users which share similar interests based on an account similarity
2 RELATED WORK
Much related work has been done on identifying spammers in a social network. Since most of the effort lies in correctly classifying normal and abnormal groups based on their activities, importance is given here to rightly classifying spammers and non-spammers. Most traditional methods try to detect spammers based on the messages they send or the activities of their accounts. Message-level detection (Benevenuto et al. 2009) checks each posted tweet for discrepancies or spam content in any URLs mentioned, but this may require real-time processing, as countless tweets are posted every hour.
Account-level detection methods (Lee et al. 2010) examine the activity of user accounts, such as whether they have promoted spam content, to establish the authenticity of an account and thus identify whether or not it is a spammer. Both of these methods, however, leave many spams unidentified at the end of the evaluation. Instead of classifying individual messages and accounts, some papers have proposed identifying spam campaigns. A campaign refers to a group of accounts that purposefully work towards the same goal; spam campaigns often contain accounts which post harmful content such as malware and viruses.
(Hatanaka & Hisamatsu 2010) proposed a method to group users into distinct blacklist
groups based on the degree of similarity in their bookmarks and they reduced the rank of the
bookmarks promoted by this blacklists. A detection framework for spam campaigns based
on the similarity of the URLs shared by the accounts is proposed by (Gao et al. 2010).
This framework quantifies the similarity measure between accounts based on the URLs they
share, to draw a similarity graph and also put forward some characters of spam campaigns.
(Lee et al. 2010), (Lee et al. 2013) has proposed a content-driven approach for identifying
spam campaigns and categorizing it. This method employs a strategy to group users based
on text similarity but classification is done by manual inspection.
Zhang et al. (2012, 2016) propose a multilevel classification method for identifying spammers and non-spammers, where the first level classifies campaigns into normal and abnormal, and further classification identifies spam and promotion campaigns. This method makes use of the similarity measure between users to plot a similarity graph. The similarity measure was calculated based on the common URLs shared by the accounts, and the work was later extended to consider the timestamp as well. Jiang et al. (2016) identify the importance of finding peer groups in a social network for a friend recommendation system.
3 PROPOSED METHOD
In this section, a new method is proposed for identifying peer groups in a locality based on the Twitter activity of the users. First, the account similarity is estimated based on the common URLs shared by the accounts; a similarity graph is plotted based on these measurements, and potential campaigns are extracted from it. Secondly, to classify these campaigns into spammers and non-spammers, machine learning techniques are applied to classify them into normal and abnormal campaigns. In the final step, having obtained the normal campaigns and filtered out the spam campaigns, the geotags are used to group the users in a normal campaign according to their locality. Thus we obtain peer groups in a locality.
The process flow is depicted in Figure 1.
$P(u) = \dfrac{\#u}{N}$ (2)
where $\#u$ is the number of tweets containing URL u in the corpus and N is the number of all tweets containing URL(s). In order to calculate the amount of information contained in all URLs posted by account $a_i$, we make use of the formula
$I_a(i) = \sum_{u \in U_i} Num_i(u) \cdot I(u)$ (3)
where $Num_i(u)$ is the number of tweets containing URL u posted by account $a_i$. The amount of information shared by accounts $a_i$ and $a_j$ through the sharing of common URLs is collectively summed as
$I_a(ij) = \sum_{u \in U_i \cap U_j} (Num_i(u) + Num_j(u)) \cdot I(u)$ (4)
$S_{ij} = \dfrac{I_a(ij)}{I_a(i) + I_a(j)}$ (5)
where $0 \le S_{ij} \le 1$.
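A small sketch of Eqs. 2–5 (illustrative only: it assumes the Shannon information of a URL is $I(u) = -\log_2 P(u)$, which the excerpt implies but does not state; `tweets` maps each account to the list of URLs it posted):

```python
import math
from collections import Counter

def similarity(tweets, ai, aj):
    # tweets: dict mapping account id -> list of URLs that account posted.
    counts = Counter(u for urls in tweets.values() for u in urls)
    N = sum(counts.values())
    info = {u: -math.log2(c / N) for u, c in counts.items()}  # assumed I(u)

    ci, cj = Counter(tweets[ai]), Counter(tweets[aj])
    Ia_i = sum(n * info[u] for u, n in ci.items())             # Eq. 3
    Ia_j = sum(n * info[u] for u, n in cj.items())
    common = set(ci) & set(cj)
    Ia_ij = sum((ci[u] + cj[u]) * info[u] for u in common)     # Eq. 4
    return Ia_ij / (Ia_i + Ia_j) if Ia_i + Ia_j else 0.0       # Eq. 5

accounts = {"a1": ["x.com", "y.com", "x.com"],
            "a2": ["x.com", "z.com"],
            "a3": ["z.com"]}
print(similarity(accounts, "a1", "a2"))  # a value in [0, 1]
```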
Now that we have obtained the measure of similarity between the various accounts, we need to plot a graph connecting the accounts which have a similarity measure above a particular threshold. The obtained graph is used to identify potential campaigns. The campaigns are those areas in the graph which are very dense. We identify them using the concept of maximal
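Assuming the truncated sentence refers to maximal cliques, campaign extraction might be sketched with networkx as follows (the edge list and minimum campaign size are illustrative):

```python
import networkx as nx

# Edges connect accounts whose similarity S_ij exceeds the threshold.
G = nx.Graph([("a1", "a2"), ("a2", "a3"), ("a1", "a3"), ("a3", "a4")])

# Dense regions of the graph are candidate campaigns; maximal cliques
# of a minimum size are one way to extract them.
campaigns = [c for c in nx.find_cliques(G) if len(c) >= 3]
print(campaigns)  # e.g. [['a1', 'a2', 'a3']]
```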
4 CONCLUSION
Peer group identification based on Twitter is proposed, and the methods involved in each stage are detailed. We have used URL-based estimations to find similar accounts and created a graph, from which the cohesive campaigns are extracted. The extracted campaigns are classified using some very important features and categorized as normal and spam campaigns. The accounts in each normal campaign are further categorized on the basis of their location, and we obtain peer groups in a particular location.
The Twitter analysis throws insight into the nature of the Twitter user and can help users connect with people having similar thoughts. This can be helpful to various recommendation systems that work on a location-specific basis.
REFERENCES
Benevenuto, F., T. Rodrigues, V. Almeida, J. Almeida, & M. Goncalves (2009). Detecting spammers and content promoters in online video social networks. In Proceedings of the 32nd International ACM Conference on Research and Development in Information Retrieval (SIGIR'09), ACM, 620–627.
Sreemol Sujix
Computer Science and Engineering, Thejus Engineering College, Thrissur, Kerala, India
Keywords: Route choice analysis, Smart card data, Data mining, Big data
1 INTRODUCTION
Metros have become the most in-demand mode of transport for passengers due to their speed, efficiency, time management, comfort, capacity to accommodate more passengers, and so forth. The metro has become a necessary piece of infrastructure for a growing metropolitan city. The use of metros has not only helped in decreasing road traffic but has also paved the way to pollution-free transport when compared to cars and other vehicles that pollute the air by emitting harmful carbon monoxide, which can create holes in the ozone layer. Therefore, using metros is more advantageous, safe and eco-friendly.
The pattern of traffic in a metro is usually very complex because the trains and routes
chosen by a passenger are unknown. Route choice analysis is a study that is related to the
distribution of passengers in the different routes and the trains chosen by them. Dealing with
such abstract and diverse data to infer the required information and modeling of route choice
behavior are two major challenges faced in public transport management.
The emergence of big data analytics has helped to store, process and manage this complex
data, whereas traditional data processing applications are inadequate to handle it. Conduct-
ing route choice analysis is of primary importance to both passengers and metro operators.
For train operators, this analysis will help them to understand how passenger flow takes place
in the metro network and hence improve service reliability. For metro passengers it will be of
great use in trip planning. Indeed, this study can help urban administrators in route sugges-
tions and managing emergency situations.
A metro generally provides its passengers with a smart card facility. A smart card is a
pocket-sized card with an embedded circuit. Every time a smart card is swiped at the station
gate, details of the trip being made with that card are recorded and the monetary value is
stored and debited from the card. This smart card data is used for data collection processes
and hence contributes to the analysis of travel behavior.
In this paper, the probability of passengers choosing a particular route for an Origin to Destination (OD) pair with multiple routes (illustrated in Figure 1) is estimated. Here, big data analytics has been employed to deal with such vast and complex data. The Hadoop framework has been preferred for this implementation because it supports batch processing on enormous amounts of information. The Hadoop framework consists of a distributed file system and a MapReduce
function. These two elements of Hadoop can be used to enable the storage of huge amounts of
data and for performing parallel processing to save time and improve efficiency.
2 BACKGROUND
Traditional approaches are no longer scalable. The old method of route choice study was to
collect the information from surveys conducted by asking passengers about their routes and
trains. This method was a tedious process and could not yield the best possible results as it
was limited to persons, places and times.
Automated Fare Collection (AFC) systems were then used for the analysis, which gave
broader information regarding the travel pattern of passengers. A drawback related to these
AFC systems was that they did not give any information regarding the train and route chosen
by passengers but just provided the details of the origin and destination stations traveled by
the passengers. In earlier studies, the walking time between the swiping gate and the platform,
and transfer time between platforms, were ignored. These have been considered in this study.
Here we consider three cases and make a comparative study: in the first case, route choice analysis is done using smart card data only; in the second, the study uses smart card and timetable data; and the final study uses smart card data, timetable data and MCL data, which is obtained by means of conductor checks.
3 LITERATURE SURVEY
Huge quantities of passive data streams are collected by smart cards, GPS, Bluetooth and mobile phone systems all over the world. This data happens to be very useful to transport planners because of the valuable spatial and temporal information it contains. Big data plays an important role in storing and processing huge amounts of data in a way that existing systems cannot. The classification of data as big data is based on volume, variety, velocity, value and veracity. Volume refers to the exponentially rising amount of data. Variety refers to the multiple sources from which the data emerges; the data so obtained is of different kinds and can be classified as structured data, which includes tables, Excel spreadsheets and so on, semi-structured data, consisting of CSV files, emails, XML files and so on, and unstructured data, which includes video, images and so forth. Data is being generated at an alarming rate, and velocity refers to the speed at which the data is produced. Value refers to the mechanisms that derive correct meaning from the data that is extracted. Veracity considers the difficulty of extracting such value from the data and helps in handling this.
3.2 Hadoop
Hadoop is one of the frameworks used to handle big data. It consists of a distributed file
system for storing large amounts of data and a MapReduce function to perform parallel
processing on the collected data by assigning work to each processor connected in the net-
work. Hadoop is an open source framework and provides distributed storage and computa-
tion across clusters of computers.
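As an illustrative sketch of this storage-plus-MapReduce pattern (not code from the paper), a Hadoop-Streaming-style mapper and reducer in Python that count trips per origin-destination pair, assuming a CSV record layout of card_id, origin, destination, entry_time, exit_time:

```python
import sys
from itertools import groupby

def mapper(lines):
    # Emit ("origin->destination", 1) for each smart card trip record.
    for line in lines:
        card_id, origin, dest, t_in, t_out = line.strip().split(",")
        yield f"{origin}->{dest}", 1

def reducer(pairs):
    # Sum the counts per OD key; Hadoop delivers map output sorted by key.
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield key, sum(n for _, n in group)

if __name__ == "__main__":
    for od, n in reducer(mapper(sys.stdin)):
        print(od, n)
```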
4 METHODOLOGIES
4.1.1 Datasets
The route choice analysis conducted here utilizes smart card data as well as timetable data. Timetable data is maintained by the stations to inform passengers of train numbers, train routes, arrival times, departure times and the metro line name or number. Combining both datasets enhances the route choice study.
where Pr(xq.e | Tab, xq.b, Rz) represents the probability that a passenger xq passes through the exit gate at time xq.e, conditioned on Tab, xq.b and the chosen route Rz. Pr(xq.e | Tab, xq.b, Rz) can thus be calculated by summing up the probabilities of all plans.
The smart card data which acts as the input for the route study includes several kinds of errant data, such as missing data, duplicated data and data with logical errors. In the data pre-processing step this erroneous data is filtered out and the trips are extracted. In the next step, route generation uses the dataset from another input, the train timetable, together with certain shortest-path algorithms. The routes which are not used in the OD pair are also filtered out. Finally, the trips are classified as follows:
• direct route;
• one-transfer route;
• multi-transfer route.
Direct-route and one-transfer-route trips help to estimate the values of θ and β, respectively. With θ and β known, the probability of choosing each route in an OD pair is calculated.
4.2 Analysis using smart card, train timetable and MCL data
The route choice study of passengers provides operators with the opportunity to improve
their passenger service. Analysis using smart card data and train timetable data only gives
information regarding a trip’s origin and destination but not the train information. So, in
order to make an accurate prediction we prefer to consider the smart card data, the train
timetable, and the conductor check data.
The conductor check data is the extra information which is collected from a passenger dur-
ing his or her trip on the metro. Metro conductors check a passenger’s ticket using a mobile
4.2.3 Validation
Due to the availability of conductor checks on the specific day d, it is possible to evaluate the
performance of the analysis by comparing the selected route with MCL data and filtering
the required route.
Table 1. Proportions.

Group    VF    RF    GF    LF
Group    1st   2nd   3rd   4th
TGrp1    VF    LF    LF    LF
TGrp2    VF    VF    LF    LF
TGrp3    RF    GF    LF    LF
TGrp4    LF    LF    LF    LF
Table 4.

Class-1   1   2   1
Class-2   2   2   2

Table 5. Comparison of route choice analysis methods using smart card data.

Maximum likelihood estimation: estimates the likelihood of route selection; uses smart card and train timetable data; applicable to different types of data; precise estimation.
K-means clustering: partitions data into clusters and uses the joint and conditional probabilities for estimation; uses smart card data; applicable to all data, but does not handle outliers well; simple and efficient method.
Bellman-Ford algorithm: finds effective paths chosen based on minimum path, cost, etc.; uses smart card data, train timetable and MCL data; applicable to network models; maximizes performance.
Class-1 represents those who take the metro on one trip and a bus on another; Class-2 represents those who take the metro for round trips, as shown in Table 4. The comparisons of average cost and travel time across five weekdays are shown in the figure; it is clear that Class-1 is lower than Class-2 in terms of both cost and travel time. It was learned that, in Shenzhen, the average cost of taking the metro is higher than taking the bus and, for economic reasons and if time permits, some passengers will choose the bus.
5 ANALYSIS OF METHODS
A comparison of route choice analysis methods using smart card data is presented in Table 5.
6 CONCLUSIONS
With the development of computer technology there has been a tremendous increase in the growth of data, and big data analytics is an emerging field in intelligent transportation systems nowadays. The objective of this paper was to study route choice behavior using smart card information, which helps in analyzing passengers' travel patterns, their routes and their train selection. This inference will indeed be useful for operators in terms of improving the services given to passengers, and will also be helpful to passengers in trip planning. A comparative study has been conducted here to understand the route choice behavior of passengers.
Architectural applications
ABSTRACT: This paper outlines the process by which the ecological conservation of the
site of the Muziris Interpretation Center and the maritime museum at Pattanam, North
Paravur was proposed, through site responsive design. The proposed site was unique, with
a major part of the site being an unrealized pisciculture plot (traditional ‘chemmeenkettu’).
The ecological conservation of the ‘chemmeenkettu’ was done after conducting buildability
studies of the site and formulating a proposal for the future realization of the site to its full
potential. The main museum was designed as a floating structure, with minimum damage to
the site and utilizing only a percentage of the waterlogged area. The museum was designed
as a climatically responsive building by using renewable energy systems in floating structures.
1 INTRODUCTION
The Muziris Heritage Project aims at reinstating the historical and cultural significance of
the legendary port of Muziris. The region is dotted with numerous monuments of a bygone
era that conjure up a vast and vivid past, and the Muziris Heritage Project is one of the big-
gest conservation projects in India, aiming to preserve a rich culture that is around 3,000
years old. The material evidence unearthed at the excavation site, located about 25 km north
of Kochi, points to the possibility that Pattanam may have been an integral part of the legen-
dary Port of Muziris, and thus the interpretation center at Pattanam helps to promote aware-
ness and understanding of the cultural distinctiveness and diversity of Muziris.
Muziris, located along the west coast of Kerala, has some of the most productive waters in the world, which paved the way for a supreme diversity and abundance of both fish and fisher folk. The indigenous and conventional methods of fishing are rapidly declining and are in need of conservation. Brackish-water fish farming and cage fish farming are unique methods of culturing fish, and these eco-friendly methods provide livelihood security for the fisher folk of the region, as well as enhancing its biodiversity by supporting varied ecosystems. The proposed site for the interpretation center is one such plot, traditionally known as a 'chemmeenkettu'.
The site is located at Pattanam, North Paravur, in Kerala, with an area of 49 acres and 92.84 cents. It lies about 1 km from the Kodungallur–Paravur route, along the banks of the Kollam–Kottapuram waterway (National Waterway III), which bounds the west and south sides of the site.
ing. Some private residences around the site are mostly of MIG (Middle Income Group) and LIG (Low Income Group) occupancy, with single-storied flat roofs.
Figure 9. Site analysis (a) views to the site (b) views from the site (c) access and entry to the site (d)
circulation and road hierarchy within the site.
4.2 Architectural language achieved: Site responsive design considering local climatic factors
Considering the tropical climate, natural ventilation, a sloping roof, long overhangs, passive and active cooling through orientation, and vertical louvers were adopted.
The natural flow of air is achieved by having full-length vertical louvers on the river-facing side, and the building is oriented so that its longest side faces the wind direction.
The building is L-shaped, with courtyards in the middle which act as pressure points for catching the wind. Further, clerestory windows are provided to allow the hot air to rise and flow outside, thus maintaining continuous air circulation within the building.
6 CONCLUSION
The Muziris Interpretation Center and maritime museum is a conscious effort to achieve a site responsive design, stressing the importance of sustainable and eco-friendly architecture. The proposal aims to induce a deep appreciation of the lost heritage of Muziris by understanding the context of the site and celebrating the essence of its uniqueness. The aspect of conservation leads the project by challenging conventional construction practices.
One of the most challenging aspects of the design was the distinctiveness of the site. As the site is a pisciculture plot, the focus was on nurturing and protecting the natural setting. The site buildability studies and analysis revealed that a larger part of the site was better left as it was; this part can be developed further in the future for aqua tourism. On the remaining plot, the museum was proposed as a floating structure with minimal intervention on the site. Each gallery was visualized as an island floating amidst the tranquil setting, connected through bridges. This not only makes the museum stand out, but elevates the visitor experience.
An ecological and climatically responsive design practice ensures maximum sustainability and retains the natural setting of the site. Further, renewable energy sources, such as solar and hydrothermal energy, are also incorporated to strengthen the design.
ABSTRACT: Daylighting is one of the measures adopted to take advantage of the climate and environment in designing buildings. Here, emphasis is given to the strategies that can be adopted to bring daylight effectively into tropical high-rises, thereby reducing the energy consumption due to artificial lighting. The tropical climate differs substantially from the conditions of other climates, and these differences should be taken into account while designing the facades of high-rises in this region. This paper discusses terms regarding daylighting components, daylighting systems and principles, and their analysis in the context of the tropical high-rise. It also analyses some case studies and derives design strategies that can be adopted for daylighting in tropical high-rises.
1 INTRODUCTION
Daylighting is the controlled admission of natural light, direct sunlight and diffused skylight into a building to reduce electric lighting, which in turn saves building energy. Used efficiently, it helps to create a productive environment for the building occupants as well as reducing total building costs by one third. Daylighting design focuses on how to provide enough daylight to an occupied space without undesirable side effects such as heat gain or glare. It therefore balances heat gain or heat loss, variation in daylight availability, and glare control.
The main aims in daylighting a building are (1) to get significant quantities of daylight as deep into the building as possible, (2) to maintain a uniform distribution of daylight from one area to another, and (3) to avoid visual discomfort and glare.1
2 DAYLIGHT MEASUREMENT
1. Lee Jin You, Roger, Lee Ji Hao, Theophilus, Sun and architecture, heavenly mathematics.
2. The European Commission Directorate-General for Energy, Daylighting for buildings.
Figure 1. Azimuth and altitude. Source: Lee Jin You, Roger, Lee Ji Hao, Theophilus, Sun and architecture, heavenly mathematics.
Figure 2. Angle of the sun in summer and winter. Source: http://www.build.com.au/window orientation-and-placement.
3. https://en.wikipedia.org/wiki/Daylight_factor.
4. Lee Jin You, Roger, Lee Ji Hao, Theophilus, Sun and architecture, heavenly mathematics.
Singapore, located on the equator, has on average six hours of sunshine a day and a high proportion of clear and partly clear blue skies. The sun path in the figure below shows that the sun is typically high overhead and traces a path that is almost directly east to west.5 Hence, to block direct solar penetration on the north and south facades, horizontal shading may be effective. The east and west facades will be exposed to low-angle sun for a few hours in the morning and evening, and need not be treated.
A clear sky produces high luminous intensity in the zone directly adjacent to the sun. Hence, skylight from a clear sky often penetrates deeper into a building, which may also cause glare to occupants that have a direct view of the sky. In such regions, high-light-transmitting glazing should be used in combination with blinds.
There are many new design opportunities which respond to the climate and solar path of the tropics, and a good understanding of sun and sky luminance is necessary to evaluate these opportunities. Very little statistical information on the sky distribution of the tropics exists, and more research needs to be carried out in this area.
Design strategies for daylighting include a daylight-optimized footprint, efficient window design, high-performance glazing, passive or active skylights, tubular daylight devices, daylight redirection devices, solar shading devices, daylight-responsive electric lighting controls, and daylight-optimized interior design, which covers furniture design, space planning and room surface finishes.
Table 1. Minimum glazed areas for view when windows are restricted to one wall.
Depth of room from outside wall (max) | Percentage of window wall as seen from inside
Planted trees serve as shading devices, beautify the landscape and provide oxygen to the occupants. Internal shading devices also help to create a sense of privacy. One of the disadvantages of using shading devices is that, in some cases, they obstruct outdoor views for occupants. The solar geometry shows that the exposure of each facade to the sun is specific and varies with orientation.7 Each facade should therefore be treated differently. For example, facades facing north in the northern hemisphere would not need shading devices, as solar penetration is restricted to only a few months of summer, whereas on the south elevation solar penetration should be controlled, and horizontal shading devices above windows are best suited there. The length of the projection depends on the height of the window and the altitude of the sun at solar noon. It should be designed in such a way that it completely eliminates solar penetration in summer and allows complete solar penetration in winter.
The steps to consider while designing a shading device are: (1) understand the sun path of the environment; (2) select the shading type: horizontal, vertical or egg-crate; (3) identify the category: fixed, adjustable, movable, dynamic or automatic shading devices; (4) calculate the design dimensions, to understand the horizontal and vertical shadow angles.8
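A rough worked example of the dimension calculation in step 4 (simple trigonometry; the window height and solar altitude are hypothetical values):

```python
import math

def overhang_projection(window_height_m, solar_altitude_deg):
    # A horizontal overhang at the window head fully shades the window
    # when its projection reaches window_height / tan(solar altitude).
    return window_height_m / math.tan(math.radians(solar_altitude_deg))

# Hypothetical values: a 1.5 m tall window under a 70 degree noon sun.
print(round(overhang_projection(1.5, 70), 2))  # ~0.55 m
```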
7. Mustapha Adamu Kaita, Dr. Halil Alibaba, research paper on Shading Devices in High Rise Build-
ings in the Tropics.
8. Mustapha Adamu Kaita, Dr. Halil Alibaba, research paper on Shading Devices in High Rise Build-
ings in the Tropics.
Figure source: Canada-Daylighting-Guide-shading-5.jpg.
After the literature study, how these strategies have been adopted in practice has to be evaluated. For this purpose, five different tropical high-rises have been selected and analysed in terms of the daylighting strategies adopted.
Its staggered placement creates visual interest from the exterior while neatly concealing the air-conditioning condenser units and services in the background. The sun screens are specially coated in a metallic bronze colour.
6 CONCLUSION
From the above study we can understand that, in the case of high-rises, design strategies for daylighting have to be adopted mainly in the building footprint and the facade treatment. Although many daylighting devices exist, the ones that can be used in high-rises are limited. For example, courtyards with skylights are one of the major daylighting devices used in low-rises, but they are not feasible in high-rises due to the tunneling effect.
The daylighting devices that can be adopted in high-rises of the tropical region mainly include balconies, sky gardens, air wells, garden pockets, split atria, etc. These strategies serve many functions: daylight penetration, sun shading, maximizing outdoor views, acting as relaxation spaces and providing natural ventilation. More than the efficiency of window design, balcony design is emphasized in high-rises.
Another point to be considered is that shading devices should be designed keeping in mind the angle of the sun, which changes with increasing height in high-rises; the shading devices adopted on the 4th floor will differ from those on the 60th floor.
From the case studies we find that another important strategy used in high-rises is high-performance glazing, mainly on the south and west facades, to allow daylight penetration without solar gain. North-south orientation, which is desirable for daylight penetration, is given less importance, and the buildings are oriented such that considerations like views to the sea and rain penetration are prioritized, as in the Marina Bay Sands and the Kanchenjunga apartments.
We can also find split atria, as in the case of the Kohinoor apartments, which can be used efficiently as a common entertainment space as well as a daylighting device.
REFERENCES
Adamu Kaita, M. & Alibaba, H. (2016, October). Shading devices in high rise buildings in the tropics. International Journal of Recent Research in Civil and Mechanical Engineering, 3(2), 37–46.
Ander, G. (2016). Daylighting. Whole Building Design Guide, National Institute of Building Sciences.
Daylight factor. (2013, July). Retrieved from https://en.wikipedia.org/wiki/Daylight_factor.
Schepers, H., McClintock & Perry, J. Daylighting design for tropical facades.
Lee Jin You, R. & Lee Ji Hao, T. Sun and architecture, heavenly mathematics.
Retrieved from https://identityhousing.wordpress.com.
Wong, M.S. (2012). Case study: The Hansar, Bangkok. CTBUH Journal.
Shikre, S. Kohinoor Square, a multifaceted development.
Sun orientation and placement. (2013). Retrieved from http://www.build.com.au/window orientation-and-placement.
The European Commission Directorate-General for Energy. Daylighting for buildings.
Om Prakash Bawane
R V College of Architecture, Bangalore, India
1 INTRODUCTION
The subject of 'Building Construction and Materials' occupies an important place in the undergraduate architecture curriculum. In terms of the number of teaching hours and credits, the subject is placed second only to the core subject of architectural design. A sequential exposure to the process of construction through seven to eight semesters of structured syllabi helps students appreciate various aspects of the constructability of a design idea. Construction is the process and means of transforming architectural ideas into a built product. In architectural projects, methods and materials of construction can be vital elements in shaping the overall design concept. Architects are trained to be conscious of this fact and to exercise their prerogative in the selection of construction systems and materials that would enhance the architectural quality of the built environment. Ironically, any act of construction has a bearing on the natural environment and ecology. The design idea and the visualization of the overall built environment are what finally influence the architect's decisions on methods and materials of construction.
2 UNDERSTANDING SUSTAINABILITY
The word sustainability finds its origin in the Latin word sustinere, meaning to hold, bear, endure, etc. [5]. The term sustainable has a rather philosophical connotation, denoting any concept, product or process that would not undermine the capacity of the earth's ecosystem to maintain its essential functions. In the context of the definition of sustainable development advocated by the World Commission on Environment and Development, sustainability is perceived as a holistic concept encompassing social, environmental and economic aspects, also referred to as the three pillars of sustainability. The training of architects needs to recognize the most sensitive aspect of sustainability, namely the environment, since construction is one of the major consumers of natural resources and at the same time a major polluter of the natural environment.
3 COURSEWORK IN CONSTRUCTION
The contents and structure of coursework in the subject of ‘Building Construction and
Materials’ are found to be long-established in most institutions. In certain universities
the sequence of exposure aligns with the sequence of onsite construction processes, for
Table 3. Sustainable materials and methods (representative).

Materials/methods: Applications
Natural clay; natural clay blended with sand and lime; quarry dust with lime/cement: substitute for conventional mortar.
Natural soil blocks made of laterite; stabilised mud blocks; fly-ash walling blocks: masonry work.
Bamboo and agricultural waste: roofing trusses, roofing, walling and flooring construction.
Dry construction: construction of walls, roofs and floors using precast and prefabricated components.
Alternative materials: hollow clay roofing and walling; soil-cement compressed walling and roofing techniques; ferro-cement concrete, etc.
Recycling of construction and demolition waste: reuse of walling blocks, timber components, metallic fixtures and concrete waste, etc.; recycling of waste timber, glass, plastics, concrete, etc. to produce new recycled walling, flooring and roofing materials.
Vernacular techniques: adobe construction, Dhajji wall construction.
The course content in sustainable construction can be carefully structured by identifying the materials and methods of traditional and contemporary practice. The relevance of indigenous technologies needs to be re-established, and the curriculum should serve as an instrument for transferring environmental construction technologies to the field. Table 3 identifies materials and methods that are deemed sustainable; however, the list is only a representative one.
The issue of sustainability in construction syllabi needs to be pursued more aggressively. The course curricula in the sample institutions offer subjects titled Energy Efficient Buildings, Sustainable Architecture/Planning and Green Architecture/Buildings. However, only one such course of semester duration is made available, in many cases as an elective in the higher semesters, and these courses provide only an overview of the subject. The concept and content of sustainability need to be entwined across the width and the depth of the entire course curriculum.
6 CONCLUSION
Climate change, rise in sea levels, rise in the earth’s average temperature and ozone layer
depletion are some definite indicators of the extent of the harm that has been caused by
REFERENCES
1 INTRODUCTION
State intervention in the housing sector began in Kerala during the 1950s. Many innovative housing programs were developed in congruence with the national policies, raising hope among the homeless poor of becoming house owners. A housing boom began in the state in the mid-seventies, and public housing schemes have had an impressive record during the past two decades in terms of investment and physical achievements (Gopikuttan, 2002). As the gap between need and supply decreased, inequality in housing conditions widened in Kerala. The poor have become progressively incapable of self-help and mutual help for solving their housing problems, for various reasons, and have thus become dependent on a supporting agency for the execution of their housing projects. The absence of professional agencies to take up such roles has created unfortunate situations, such as the mismanagement of funds, poor access to infrastructure, etc.
In India, 32% of rural households live in kutcha structures, but the figure reduces to 19% in Kerala (Panchayat Level Statistics, 2011). Indira Awaas Yojana (IAY), since its inception in 2007, has been the most successful housing scheme in Kerala, decreasing the number of homeless and also establishing the policy of gender mainstreaming. The LIFE mission is the newest housing scheme for the economically disadvantaged and homeless population of Kerala, as envisioned by the state government. The LIFE mission survey lists a homeless population of two lakh in Kerala to benefit over the next five years, in order to fulfill the national policy of Housing for All 2022. The research objective of this paper is to explore the reasons for the incompleteness of houses in Vellanad and to summarize strategies to solve it.
3 METHODOLOGY
As part of the LIFE housing mission of the government of Kerala, it was decided, as a first step, to complete all of the incomplete houses sanctioned under the earlier public housing schemes. A socio-economic study was done by 18 postgraduate students of Planning (Housing) at the College of Engineering Trivandrum, guided by two faculty members of the department. The survey was conducted in the block panchayat area to assess the condition of incomplete houses and to examine the reasons behind them. Initially, a pilot assessment of five houses was conducted and a detailed questionnaire was developed; the questionnaire was then modified to include all of the necessary data. The research team conducted a two
highest in the year 2010–11. Ownership shows a changing pattern, with male ownership decreasing from 70% to 61% and female ownership increasing from 28% to 38% of the total number of houses completed during the years 2006 to 2011. The number of pucca houses in Vellanad shows an increasing rate, from 66% in 2006–07 to 82% in 2010–11. Kutcha houses increased to 26% during 2007–08 but decreased to 18% during 2010–11. The primary survey shows that the national housing mission, Indira Awaas Yojana (IAY), was actively executed in the Vellanad block during 2010–11, affirming that IAY has been able to effectively improve the rural housing scenario in Vellanad by completing a high number of pucca houses with female ownership. The year 2016 saw a change in the strategy of rural housing when the government of Kerala launched its exclusive housing mission named LIFE (Livelihood Inclusion Financial Empowerment), which incorporates livelihood inclusivity and financial empowerment as its strategies. The LIFE mission also includes the landless population and dilapidated houses. This mission will empower the beneficiaries to own a pucca house and improve their living conditions by the year 2022, coinciding with the national mission of Housing for All 2022.
After the survey, the team met and had a group discussion to exchange the observations they had made regarding their grama panchayats.
Certain common characteristics could be identified regarding the issues and, thus, the analysis criteria were decided based upon them. 73% of the incomplete houses had been allotted under the IAY scheme. Various analytical graphs were generated to analyze the physical infrastructure as well as the socio-economic conditions (Figures 2, 3).
6 CONCLUSION
Vellanad represents a rural area in Kerala. The majority of the population is engaged in agriculture or related activities, and the block offers a variety of study opportunities along with an active housing mission. Four reasons have been identified that affect housing schemes for the poor in Kerala: (1) inappropriate plot allocation, (2) lack of expert professional advice, (3) the age and health of the beneficiary, and (4) reduced beneficiary participation. The identified issues can be addressed through two strategies. The first is to rework the team leading the housing scheme by including urban planners or housing experts to guide site selection and to prepare appropriate floor plans. The second is to give due importance to associated home-based income-generating activities, which are usually undertaken by the poor but are presently neglected. The research leaves scope for further investigation into aspects such as the scope of livelihood inclusion and methods to incorporate alternative building materials and technology into similar rural housing development. It is hoped that such studies are undertaken before housing schemes are formulated.
REFERENCES
ABSTRACT: Solid waste is one of the biggest issues in wetland regions worldwide due to their high water content and scarcity of land. Inefficient waste management often leads to pollution of the environment and subsequent degradation of such regions. The study examines the problems prevalent in the Kuttanad Wetland Region, known as the ‘Rice Bowl of Kerala State’. Solid waste and its improper handling have emerged as one of the biggest challenges in this region, which is affected by severe deterioration of its water sources and pollution of its natural elements. The study is based on primary and secondary data collected from the region and presents a profile of successful solid waste management strategies through a case discussion of The Netherlands. The paper recommends strategies for solid waste management in wetland regions which could lead to plausible sustainable development of the region.
1 INTRODUCTION
Municipal Solid Waste (MSW) is one of the biggest challenges in the development of urban
pockets worldwide. The composition of municipal solid waste varies greatly from municipal-
ity to municipality, and it changes significantly with time (Kumar, et al., 2016). The process
of waste management includes an array of tasks which are focused towards generation, pre-
vention, characterisation, monitoring, treatment, handling, reuse and residual disposition
of solid wastes. Unlike the natural ecosystem and its ecological cycles, waste management deals with the waste generated by man-made activities and processes. The process faces many challenges in contexts like India, with its increasing population, squalor and strained economic conditions. The system of governance is also a major hurdle, beset by corrupt practices, rapid unplanned urbanisation and a lack of resource management. The development trend has exerted more pressure on the existing system, which requires better management and technological interventions. The management of waste in a wetland region
management and technological interventions. The management of waste in a wetland region
brings a set of uncertain factors for effective execution. The scope of the research includes
solid waste disposal strategies pertaining to a wetland region. The study is based on primary
and secondary data collected pertaining to the Kuttanad Wetland Region (KWR) which sur-
rounds the Vembanad Lake.
Table 1. Municipal solid waste generation as per the standard norms (based on reports from NEERI,
1996; Varma & Dileep, 2004; SEUF, 2006).
The Central Pollution Control Board puts the State average figure at around 249 grams per day per person (CPCB, 2012). It is observed that there is no drastic change in the total MSW generation in the region.
Waste from domestic sources forms the largest share (48 per cent) of the MSW, followed by commercial establishments, hotels and restaurants, and street sweepings, with waste from markets, construction and demolition activities and institutions/schools being the other important components. In the major cities of the State, around 80 per cent of the waste is compostable organics, enabling a high level of recycling in the form of manure or fuel (KSUDP, 2006 & Varma, A., 2009).
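To make these proportions concrete, the figures above can be combined in a short calculation. The sketch below (Python) is illustrative only: the population value is an assumed placeholder introduced here, not a figure from the study, while the per-capita rate and composition shares are those quoted above.

# Illustrative estimate of regional MSW generation from the per-capita
# figure and composition shares quoted above. The population value is an
# assumed placeholder, not a figure from the study.
PER_CAPITA_KG_PER_DAY = 0.249   # 249 g/person/day, State average (CPCB, 2012)
DOMESTIC_SHARE = 0.48           # share of MSW from domestic sources
COMPOSTABLE_SHARE = 0.80        # compostable organics in major cities (KSUDP, 2006)

population = 1_800_000          # assumed population, for illustration only

total_t_per_day = population * PER_CAPITA_KG_PER_DAY / 1000.0
print(f"Total MSW:            {total_t_per_day:.0f} t/day")
print(f"Domestic sources:     {total_t_per_day * DOMESTIC_SHARE:.0f} t/day")
print(f"Compostable fraction: {total_t_per_day * COMPOSTABLE_SHARE:.0f} t/day")

With these assumptions the region would generate roughly 450 tonnes of MSW per day, of which the compostable fraction indicates the scale of potential recycling as manure or fuel.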
The Kuttanad Wetland Region (KWR) is spread over 1,471 square kilometres and is a unique site in India where paddy farming is conducted below sea level. It is also recognised as a Globally Important Agricultural Heritage System (GIAHS) by the FAO (Food and Agriculture Organization) of the United Nations. The KWR is spread over three districts (Alappuzha, Kottayam and Pathanamthitta), with river waters reaching the Vembanad Lake from six districts. The region consists of 78 villages, 25 census towns and seven municipalities.
A comparison was made between the various solid waste disposal methods, weighing their advantages and disadvantages based on their utility in the study area. The methods considered included landfills, incineration, pyrolysis, deep-well injection and deep-sea waste disposal. Based on the analysis and expert inputs, landfilling and incineration were observed to be the more feasible options for large-scale solid waste disposal in wetland regions.
Following the analysis of the existing situation, the following recommendations are put forward as sustainable strategies for solid waste management.
5 CONCLUSION
The concept of solid waste management in wetland regions has many facets for consideration. The success of the Alappuzha Municipality in adopting the Zero-Waste strategy shows that a series of different techniques can be combined for effective waste management. Future research could focus on sustainable technologies which encourage user participation in waste management strategies. This research suggests that solid waste management strategies have to combine strong legislative procedures, modern technology and a participatory approach to achieve plausible solutions.
REFERENCES
CESS. 2001. Carrying capacity based development planning of Greater Kochi Region (GKR), Rep. Centre
for Earth Science Studies, Thiruvananthapuram. p. 269.
CPCB. 2012. Status Report on Municipal Solid Waste Management, Rep. Central Pollution Control
Board. New Delhi. p. 9.
Feller, G. 2014. Dutch successes. Waste Management World 11(1). Northbrook.
KSUDP. 2006. Solid waste management of Kollam, Kochi, Thrissur and Kozhikkode Corporations of
Kerala. Dft. Detailed Project Report. Local Self Government Department, Government of Kerala &
Asian Development Bank.
1 INTRODUCTION
With the increase in impervious surfaces in urban areas, the storm water runoff is overwhelm-
ing the existing infrastructure, which causes flooding and sewer overflows. Under these con-
ditions, cities are under pressure to find cost-effective, sustainable and socially responsible
solutions to urban flood management. This can be done through Green Infrastructure (GI),
which can complement or augment the present solutions for urban flood management. The
term “green infrastructure,” when used for storm water management, denotes techniques, such as rain gardens, green roofs, permeable pavements, street trees, and rain barrels, that infiltrate, evapotranspire, capture, and reuse storm water onsite. GI allows for both “a
reduction in the amount of water flowing into conventional storm water systems (and thus
a reduction in the need to build or expand these systems) and a reuse of storm water at the
source.”
2 GREEN INFRASTRUCTURE
2.1 Definition
Green infrastructure refers to natural or semi-natural ecosystems that provide water utility
services that complement, augment or replace those provided by gray infrastructure. A GI
framework can be developed on any scale, including multinational, national, regional, local
community or an individual plot. Three scales have been suggested: individual, community and statewide. The framework is applied depending upon the relevant goals of the community and the benefits to the environment.
2.2 Elements of GI
A wide variety of green infrastructure elements exist and, depending upon the location, preferences, living standards and the goals of the community, the elements chosen may differ. All GI elements are intended to provide a safe environment and may improve the economic status of the community.
The GI elements are as follows:
2.2.5 Wetlands
Wetlands receive and treat storm water drained from limited impervious areas. They are aesthetically pleasing and provide habitat for small wildlife. They do not require a large amount of space and can be useful in congested urban areas. Wetlands are an effective means of managing the more intense and frequent precipitation events, helping to reduce peak flows and the intensity of flood events in urban areas.
Urban areas in more developed economies tend to have a higher share of built-up area with hard surfaces. Where the ground is unable to absorb rainwater, the water flows directly into rivers, streams, and sewers.
The United States Environmental Protection Agency (EPA) estimates that a typical Ameri-
can city block “generates five times more runoff than a woodland area of the same size, while
only about 15 percent (of rainwater) infiltrates into the ground for groundwater recharge”.
During major rainfall events, this additional runoff overwhelms rivers, streams, and sewers
and causes severe flooding. Additional risks include drought (due to reduced groundwater
recharge and reduced surface water storage) and negative impacts on water quality.
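The magnitude of this effect can be sketched with a simple runoff-coefficient calculation. The coefficients and the rainfall event below are assumed values chosen for illustration (the woodland-to-urban ratio is set to reproduce the roughly fivefold difference cited by the EPA); they are not data from this paper.

# Rough runoff comparison for a single rainfall event over a 1 ha block.
# Runoff coefficients and rainfall depth are assumed, illustrative values.
AREA_M2 = 10_000          # 1 hectare
RAINFALL_M = 0.05         # a 50 mm rainfall event (assumed)
C_WOODLAND = 0.10         # assumed runoff coefficient, woodland
C_URBAN = 0.50            # assumed runoff coefficient, built-up block

rain_volume = AREA_M2 * RAINFALL_M              # m^3 falling on the block
runoff_wood = C_WOODLAND * rain_volume
runoff_urban = C_URBAN * rain_volume

print(f"Rainfall on block: {rain_volume:.0f} m^3")
print(f"Woodland runoff:   {runoff_wood:.0f} m^3")
print(f"Urban runoff:      {runoff_urban:.0f} m^3 "
      f"({runoff_urban / runoff_wood:.0f}x woodland)")

Under these assumptions, 500 m³ of rain on the block yields 50 m³ of woodland runoff but 250 m³ of urban runoff, the five-to-one ratio that conventional drainage systems must then carry.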
The combination of increasing flood risk, the potential for major human and economic
losses, and the unevenness in the efficacy and costs of gray infrastructure has led to a growing
interest in exploring other approaches.
GI solutions focus on managing wet weather impacts by using natural processes. As part
of an integrated flood risk management framework, they can also deliver environmental,
social, and economic benefits, and can be cost-effective, low in impact, and environmentally
friendly (sustainable). In contrast, traditional approaches, such as levees and dams, focus on
changing the flow of rivers and streams to protect local communities, and use piped drain-
age systems in urban areas to quickly move storm water away from the built environment.
As an unintended consequence, fast drainage of water may result in drought problems, and
drought may reduce the ability of existing green spaces to provide important services, such
as reducing heat stress.
Gray infrastructure solutions remain a key component of flood risk management frame-
works and are necessary in many situations. But GI solutions can be a valuable part of an
integrated approach. GI solutions include, among others, wetlands, bio shields, buffer zones,
green roofing, tree pits, street side swales, porous pavements, and the use of green materials
(wood, bamboo, coconut nets, etc.). These measures not only help to reduce flood impacts
but also produce environmental and health benefits.
3 FLOODING IN INDIA
India is the second most flood-affected nation in the world, after Bangladesh. It accounts for one-fifth of the global deaths by floods every year, and on average 30 million people are evacuated annually. The area vulnerable to floods is 40 million hectares, and the average area affected by floods is 8 million hectares. Unprecedented floods take place every year in one place or another. The most vulnerable states of India are Uttar Pradesh, Bihar, Assam, West Bengal, Gujarat, Orissa, Andhra Pradesh, Madhya Pradesh, Maharashtra, Punjab, and Jammu and Kashmir. Floods have been recorded since ancient times; in independent India, the first major flood occurred in 1953, and major floods have occurred every year since.
4 GI IN INDIA
Green infrastructure has been practiced in India for a long time, but not in a co-ordinated manner. The elements of GI are rarely implemented together in one place, although their combined use could yield multiple, long-term benefits. Many of the elements are being implemented successfully, but the benefits of their combined use are not being realized. For example, green roofs are practiced effectively in Bangalore and Hyderabad, and rain gardens have been implemented at Shamshabad airport. Rainwater harvesting is another element of GI that is widely practiced in India. Porous pavements, on the other hand, have not been used in India until now, so their potential for urban flood management has not yet been explored.
6 INFERENCE
The development pattern in India shows an increasing trend toward urbanization, and there has been major investment in gray infrastructure. Green infrastructure has not been co-ordinated or integrated at any level, even though awareness of sustainable development is on the rise. There is no special GI policy or strategy for Indian cities; the various elements of GI remain fragmented and their potential is not fully utilized.
1. A limit to the amount of impervious space must be incorporated in guidelines based on
the context.
2. To make it more successful, incentives and subsidies must be given to those who are
involved in such practices.
3. The cities can be categorized in the order of flood severity and the elements of GI can be
incorporated accordingly, based on the context.
4. Find opportunities in existing regulations: examine whether/how current permits and bye-
laws can cover the new activities and create awareness of the same.
5. The approach must not be limited only to open spaces, roads, apartments and houses; it must also be implemented in government buildings.
6. A town planning scheme can be the method through which GI can be integrated and can
act as a platform to improve the resources of GI.
8 CONCLUSION
Over the last two decades, urban planning orthodoxy has promoted a compact urban form
and higher densities to reduce energy consumption and the ecological footprint of cities.
ACKNOWLEDGMENT
REFERENCES
Chatburn, C. (2010). Green infrastructure specialist, Seattle Public Utilities. Interview by Sarah Hammitt.
Hoang, L. & Fenner, R.A. (2015). System interactions of storm water management using sustainable
urban drainage systems and green infrastructure. Urban Water Journal, 2016, 1–21.
Lennon, M. & Scott, M. (2014). Urban design and adapting to flood risk: The role of green infrastruc-
ture. Journal of Urban Design, 19(5), 745–758.
Opperman, J.J. (2014). A flood of benefits: Using green infrastructure to reduce flood risks. Arlington,
Virginia: The Nature Conservancy.
Soz, S.A. & Kryspin-Watson, J. (2016). The role of green infrastructure solutions in urban flood risk
management. Urban Flood Community of Practice, 25(3), 12–20.
Valentine, L. (2007). Managing urban storm water with green infrastructure: Case studies of five U.S. local
governments.
ABSTRACT: Urban waterways represent a potential site for interaction with nature in a busy urban environment, what we here refer to as “blue spaces”. Blue spaces need to be viewed as amenities and given the same importance as green spaces. Little research has been done on the accessibility of and interaction with blue spaces in an urban area. Through this paper, we study how a blue space in an urban area is perceived by the nearby residents and the reasons behind such a perception, whether positive or negative. The case of the Conolly Canal passing through Kozhikode city is studied and analyzed to find whether it is perceived as a positive or a negative amenity and what factors lead to such a social impact.
1 INTRODUCTION
The study of different aspects of urban planning is often complex, as a number of details need to be taken into consideration. Normally, attention is given to the physical, social and economic environment. The social environment has to be given major priority, as all interventions are ultimately meant for social welfare. Research therefore has to focus more on how people perceive different elements of the physical environment, in order to direct planning interventions in the right way.
There is a growing literature showing how proximity to urban green space can produce
improved health outcomes like reductions in obesity, diabetes and cardiovascular morbidity
(Cutts, Darby, Boone, & Brewis 2009; Ngom, Gosselin, Blais, & Rochette 2016). Urban green
spaces are not limited to terrestrial parks and open areas, but also include urban waterways.
The benefits provided by water features have been widely acknowledged, both as ecological
services (e.g., carbon sequestration, oxygen production, noise reduction, microclimates, etc.)
and as places that are used for recreation and social interaction (e.g., exercise, sport, etc.)
(Kumar 2010, Kondolf & Pinto 2016). In this paper, we are trying to explore how local resi-
dents experience an urban blue space in a sample of neighborhoods in Kozhikode.
2 BLUE SPACES
As blue spaces, we consider hydrographic features that can be waterbodies (e.g., estuaries, ice
masses, lakes and ponds, playas, reservoirs, and swamps and marshes) or flowlines that make
up a linear surface water drainage network (e.g., canals and ditches, coastlines, streams and
rivers) (USGS 2015).
In the sparse blue space literature that does exist, coastal waterways were shown to provide
quality of life benefits, and residents most frequently visited waterways closest to where they
lived (Cox, Johnstone & Robinson 2006). Another study explored distance to stormwater
ponds in Florida, finding that economically stressed census block groups in the inner-city
community tended to be located closer to stormwater ponds of lower quality, diversity, and size (Wendel, Downs, & Mihelcic 2011). Meanwhile, inland urban waterways such as riv-
ers and canals remain understudied as neighborhood amenities with potential impacts on
urban households. Two meta-analyses focusing on the impacts of blue space on mental health
(Gascon et al. 2015) or long-term human health (Völker & Kistemann 2011) found inad-
equate evidence due to the limited amount of empirical research on the topic.
Factors such as households’ accessibility to the blue space, whether they interact with or use it, the purpose of visits, the time spent there and its influence on daily lives are studied in this paper. A similar study in Northern Utah, United States, found the blue space considered there to be a positive amenity, based on social perception and accessibility.
3 STUDY AREA
Conolly Canal, usually called Canoly Canal, is part of the west coast canal network of Kerala and runs through Kozhikode city. It was constructed in 1848 under the orders of the then collector of Malabar, H.V. Conolly.
The canal stretch through Kozhikode town is about 11.4 km long and connects Akalapuzha in the north and Kallai puzha in the south of Kozhikode town. The width of the canal varies between 6 and 20 m, and the water depth during the monsoon ranges between 1 and 3 m.
4 METHODOLOGY
The survey was conducted by a team of eight members: three engineers of the Kerala State Pollution Control Board, an official from the Town Planning Department and four postgraduate students.
The study comprised a socio-economic survey covering the residents and stakeholders living along the banks of the stretch. The stakeholders included not just households, but also commercial establishments, hospitals and industries. The survey enquired into the different types of interaction the stakeholders have with the blue space and the factors responsible. This paper concentrates on the social perception of the urban blue space and the factors responsible for such a perception, positive or negative.
The canal portion to be surveyed was divided into 8 stretches, each stretch being approxi-
mately 2 km. The stretches are in between the following points as shown in Figure 2.
1. Eranjikkal
2. Kunduparamba
3. Modappattupalam
4. Ashirwad lawns
5. Sarovaram Biopark
6. Kalluthamkadavu
7. Mooriyad
8. Kallai
9. Kothi bridge
There are 97 industries on the banks of the canal and river stretch, of which 54 are wood-based units. There are also 501 residences, 208 commercial establishments and seven hospitals. In total, 835 stakeholders, including residents and businesses located on the banks of the waterway stretching from Elathur to the Kothi estuary, were taken up for the survey.
The survey characterized households on the basis of having direct access to the canal. Respondents were asked whether the canal affects their lives and, if so, how. They were asked whether they use the canal, so as to analyse the purposes for which it is used. The analysis also explores why the canal is not being used or accessed. Finally, the involvement of people in the revival of the canal is assessed.
5 RESULTS
• Drainage
• Industrial wastes
• Hospital wastes
5.6 Would you cooperate with any revival projects for Conolly Canal?
The lining provided along the canal is damaged in places, which is largely attributed to encroachment by properties along the stretch of the canal. Such illegal intrusion might make the people involved reluctant to cooperate with any revival projects.
The positive responses show the concern and will of the people to restore the health of the urban ecosystem. The very few who decline to cooperate with measures to revive the canal do so because they fear losing their land through land acquisition, or being caught for illegal encroachment on the canal area.
6 CONCLUSIONS
The results show that the urban blue space that we have taken is perceived as a negative amen-
ity by the people of the neighborhood. The major reason for such a result is attributed to the
increase in water pollution due to improper management. People seem to be disinterested in
accessing or spending time at the canal. Though a blue space has multiple ecological, social and recreational benefits, the canal under study turned out to have a negative influence on the lives of urban residents.
A similar study in Northern Utah found the urban blue space studied there to be perceived as a positive amenity. This shows that an ecologically healthy or restored waterway
with public access opportunities can contribute to an aesthetically pleasing experience. On
the other hand, unmonitored or poorly managed urban waterways can be sites of flooding
risk, insect pests, pollution and/or waste disposal. Finally, even ecologically sound wetland
systems can be perceived by humans as disamenities, due to the smell of anaerobic decompo-
sition and the insect populations that thrive in them.
A blue space is an important asset for an urban environment. Planning interventions can
turn any negative influence a blue space has into a positive one. Planners need to promote use of
and familiarity with urban waterways in order to maximize benefits to local residents and
communities. Restoration is increasingly advocated as a strategy facilitating public access
and use of urban waterways. Infrastructure development like provision of towpaths, naviga-
tional aids, fencing and lighting can improve accessibility and safety. The water quality can
be improved through solutions like swinging weed gates, air curtains, physical removal, flush-
7 LIMITATIONS
The results of the study may or may not be generalizable to other regions; this will depend on the social structure, built environment and trajectories of urban growth of the region under consideration. The study covers only aspects of the social perception of urban blue space. Moreover, the study is limited to a canal in an urban area; the results might change with the type of urban blue space.
8 FUTURE RESEARCH
Further research can consider more social, ecological and recreational aspects. The interventions necessary to revive and elevate the potential of a blue space in an urban area can also be studied.
ACKNOWLEDGEMENT
The authors are immensely grateful to Mr. K. V. Abdul Malik, Regional Town Planner,
Kozhikode for granting access to data concerning Conolly Canal. We acknowledge the con-
tributions of faculty, Government Engineering College, Thrissur, friends and family for the
successful completion of the study.
REFERENCES
Cox, M.E., Johnstone, R., & Robinson, J. 2006. Relationships between perceived coastal waterway con-
dition and social aspects of quality of life. Ecology and Society 11(1): art35.
Gascon, M., Triguero-Mas, M., Martínez, D., Dadvand, P., Forns, J., Plasència, A., et al. 2015. Mental
health benefits of long-term exposure to residential green and blue spaces: A systematic review. Inter-
national Journal of Environmental Research and Public Health 12(4): 4354.
Historic Alleys, Historic Musings from a Malabar Perspective. 2017. Retrieved from: http://historicalleys.blogspot.in/2017/07/conolly-and-calicut-canal.html.
Kondolf, G.M., & Pinto, P.J. 2016. The social connectivity of urban rivers. Geomorphology 277: 182–196.
Kumar, P. 2010. The economics of ecosystems and biodiversity: Ecological and economic foundations.
La Rosa, Daniele. 2014. Accessibility to greenspaces: GIS based indicators for sustainable planning in a
dense urban context. Ecological Indicators 42: 122–134.
Haeffner, M., Jackson-Smith, D., Buchert, M., & Risley, J. 2017. Accessing blue spaces: Social and geographic factors structuring familiarity with, use of, and appreciation of urban waterways. Landscape and Urban Planning 167: 136–146.
USGS. 2015. National hydrography dataset. Retrieved from: http://Nhd.usgs.gov.
Völker, S., & Kistemann, T. 2011. The impact of blue space on human health and wellbeing—Salutoge-
netic health effects of inland surface waters: A review. International Journal of Hygiene and Environ-
mental Health 214(6): 449–460.
Wendel, H.E.W., Downs, J.A., & Mihelcic, J.R. 2011. Assessing equitable access to urban green space:
The role of engineered water infrastructure. Environmental Science & Technology 45(16): 6728–6734.
V.P. Shalimol
Urban Planning, Government Engineering College, Thrissur, Kerala, India
K.M. Sujith
School of Architecture, Government Engineering College, Thrissur, Kerala, India
ABSTRACT: The sensible consumption of energy plays a major role in delivering sustainable development, and this responsibility comes through good governance and practice. Governance has been known in India for millennia, through Kautilya. In the present world, accountability, transparency, inclusiveness, equitability, etc. are the key ingredients of good governance. Thus, energy efficiency, sustainability and governance are interconnected. These concepts have brought together carbon emissions, climate change, adaptation and mitigation, as well as employment and poverty reduction, and the concept of energy efficiency interlinks these thoughts. It involves legislative frameworks, funding mechanisms and institutional arrangements, which go together to support the implementation of Energy Efficiency (EE) strategies, policies and programs. The government, EE stakeholders and the private sector should work together to achieve this. India’s population makes up 18% of the world’s population, while its energy consumption is 6% of the world’s primary energy use, making its per capita consumption one-third of the global average; energy consumption is always on the rise. Energy efficiency and its influence on the governance sector are analyzed through this paper, covering laws and decrees, strategies and action plans, funding mechanisms, implementing agencies, international assistance, etc.
1 INTRODUCTION
The dimensions of governance include the possession of power, the competency to make decisions, how the people’s voices are heard and how accounts are rendered. The qualities of good governance were expounded long ago by Kautilya in his treatise Arthashastra: “In the happiness of his subjects lies his happiness, in their welfare his welfare; whatever pleases himself he shall not consider good”.
The Twelfth Five Year Plan (2012–2017) identifies good governance as essential for a well-functioning society. It provides legitimacy to the system by giving citizens a way of using resources effectively and by the delivery of services. The key ingredients of good governance include accountability, transparency, inclusiveness, equitability, sustainable development, etc. Good governance has always played a critical role in advancing sustainable development; thus, good governance and sustainability are two faces of the same coin. Energy efficiency, in turn, plays a major role in the sustainable development of institutions and countries. Energy efficiency, sustainability and governance can therefore be viewed in line with carbon reduction, climate change, adaptation and mitigation, as well as employment and poverty reduction. The way energy efficiency can be incorporated into sustainable development is through energy efficiency governance.
The International Energy Agency (IEA), with financial support from the European Bank for
Reconstruction and Development (EBRD) and the Inter-American Development Bank (IDB),
conducted a study on energy efficiency governance. Energy Efficiency (EE) governance includes
Russia
Code
• Thermal performance of buildings
Labels
• Energy efficiency class of multifamily buildings
• Green standards (2010)
Canada
Code
• Alberta building code (2011)
Labels
• BOMA Best (Building Environmental Standards) Version 2
• ENERGY STAR Portfolio Manager Benchmarking Tool
• LEED Canada (2009)
• LEED Canada (Existing Building: Operations & Maintenance)
Incentives
• EcoENERGY Retrofit (2007)
Technology development
This includes the development and demonstration of EE technologies.
Funding remediation
The introduction of revolving funds for EE investments, project preparation facilities and
contingent financing facilities come under this heading.
India is home to 18% of the world’s population, yet its primary energy use is 6% of the world’s consumption, so energy consumption per capita is only one-third of the global average. India has been responsible for almost 10% of the increase in global energy demand since 2000. Its energy demand in this period has almost doubled, pushing the country’s share of global demand up to 5.7% in 2013 from 4.4% at the start of the century. As the country progresses, with rising incomes and a better quality of life, there will be greater demand for energy. Coal now accounts for 44% of the primary energy mix. Oil consumption in 2014 stood at 3.8 million barrels per day (mb/d), 40% of which is used in the transport sector. Demand for diesel has been particularly strong, now accounting for some 70% of road transport fuel use. This is due to the high share of road freight traffic, which tends to be diesel-powered, and to government subsidies that kept the price of diesel relatively low (the diesel subsidy was removed at the end of 2014; gasoline prices were deregulated in 2010). India is trying to meet its demand on both the supply and demand sides.
LPG use has increased rapidly since 2000, reaching over 0.5 mb/d in 2013 (LPG is second only
Solarization of CIAL
The use of solar energy at airports has developed gradually. Airports experimented with
installations that provided a few hundred kilowatts of peak power at the beginning of this
century. Nowadays, two, five or ten megawatt installations are not uncommon and the eco-
nomics are much improved as grid parity is approached.
Cochin International Airport (CIAL), serving the city of Kochi in the Indian state of
Kerala, is the busiest and largest airport in the state and the fourth busiest in the country. The
airport serves more than five million people annually.
CIAL (Cochin International Airport Limited) set up a 100 kW PV-based solar power plant on the rooftop of the arrival block as a pilot project in March 2013. It required 400 panels, each with a capacity of 250 Wp. The installation was designed and executed by the Kolkata-based Vikram Solar Power. The facility generated about 400 units of power daily, and the absence of battery backup reduced the capital cost drastically.
The next logical step was to go for full generation of the power required for the entire operation of CIAL, integrating up to around 48,000 kilowatt hours per day from a PV-based solar system in its own backyard. To make the airport grid neutral, the capacity of the system was to be about 12 MWp, able to generate around 50,000 kilowatt hours per day. This was done with permission from the KSEB (Kerala State Electricity Board) to bank the electricity. Bosch Limited was awarded the work through a transparent tender process and executed it with élan and precision. The total project cost was about ₹62 Cr., at about ₹5.17 Cr/MW, which is substantially less than the benchmark set by the regulator, and the project payback period is under six years. M/s Bosch also handles the system maintenance, on contract, at a cost of ₹50 lakh per annum.
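The quoted economics can be cross-checked with a back-of-envelope calculation. In the sketch below, the panel count, capacities and project cost come from the text; the grid tariff used for the payback estimate is an assumption introduced here for illustration only.

# Back-of-envelope check of the CIAL solar figures quoted above.
panels, panel_wp = 400, 250
pilot_kwp = panels * panel_wp / 1000            # pilot plant: 100 kWp

project_cost_cr = 62.0                          # total cost, Rs crore (from text)
capacity_mw = 12.0                              # plant capacity, MWp (from text)
print(f"Pilot capacity: {pilot_kwp:.0f} kWp")
print(f"Cost per MW:    Rs {project_cost_cr / capacity_mw:.2f} Cr")   # ~5.17

daily_kwh = 50_000                              # generation, kWh/day (from text)
tariff_rs_per_kwh = 6.0                         # assumed grid tariff, illustrative
annual_value_cr = daily_kwh * 365 * tariff_rs_per_kwh / 1e7  # 1 Cr = 10^7 Rs
print(f"Simple payback: {project_cost_cr / annual_value_cr:.1f} years")

With a tariff of around ₹6/kWh, the simple payback works out to roughly 5.7 years, consistent with the under-six-years figure quoted above.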
3.9 Evaluation
For effective policy-making, compliance evaluation is critical. The importance of evaluation in policy-making has been well established through the examples of Denmark and Sweden; Denmark, for instance, has made a concerted effort to use energy efficiency program evaluation in developing policy and long-term strategy. Compliance evaluation can assist Indian policy makers in identifying potential issues in the execution of the Energy Conservation Building Code (ECBC) and help them make the necessary changes. Compliance evaluation will likewise enable India to accomplish its proposed energy savings and emission reductions through the ECBC.
This paper has two sections, the first detailing the concept of energy efficiency governance from a global perspective. Energy efficiency governance is a new concept that links energy efficiency with governance, with a focus on sustainability. For any country to be energy efficient, it should implement policies and programs under the umbrella of the government, through governance. India has undertaken many initiatives in energy efficiency, indirectly falling under the head of energy efficiency governance. This paper sets out the framework of Indian energy efficiency governance. India exhibits good practices in public–private participation, international assistance, government co-ordination, stakeholder engagement, etc., but lacks efficiency in managing energy sources. India adopts transparent and accountable systems, yet the overall institutional governance of energy efficiency is weak. Separate local-level energy efficiency policies have to be created to allow actions to begin from the bottom level of governance; at present there is no constitutional support for beginning energy planning at the local level, which would impart energy efficiency. Energy resourcing and the evaluation of energy efficiency are also found to be inefficient in the Indian context. Efficiency needs to be analyzed qualitatively not only at the building level but also at the area level, so as to match energy demand with its resource requirements.
ACKNOWLEDGMENT
REFERENCES
ACEEE (American Council for an Energy Efficient Economy). (2010). America. State Energy Effi-
ciency Scorecard for 2010, October.
BEE (Bureau of Energy Efficiency). (2010). National mission for enhanced energy efficiency—mission document: Implementation framework. Ministry of Power, Government of India.
IEA (International Energy Agency). (2015). India energy outlook. France: Directorate of Global Energy Economics.
FACT SHEET: The United States and India—Moving Forward Together on Climate Change, Clean
Energy, Energy Security, and the Environment (2012, September 22). Retrieved January 12, 2018, from
https://in.usembassy.gov/fact-sheet-united-states-india-moving-forward-together-climate-change-
clean-energy-energy-security-environment/.
International Energy Agency. (2010). Handbook of energy efficiency governance. Australia.
International Solar Alliance: India’s brainchild to become a legal entity on Dec 6. (2017, November 14).
Retrieved March 04, 2018, from https://economictimes.indiatimes.com/news/environment/develop-
mental-issues/international-solar-alliance-indias-brainchild-to-become-a-legal-entity-on-dec-6/arti-
cleshow/61647360.cms?utm_source=contetofinterest&utm_medium=text&utm_campaign=cppst.
ISA mission. (2016, August 2). Retrieved January 5, 2018, from http://isolaralliance.org/.
Ministry of New and Renewable Energy (GOI), Ministry of Power (GOI), USAID. (2016). Partnership
to Advance Clean Energy-Deployment (PACE-D) Technical Assistance Program.
Mohan, B. & George, F.P. (2016). Airport solarization: CIAL steals the thunder.
Niti Aayog to rank states on energy efficiency. (2017, February 22). Retrieved March 04, 2018, from
http://economictimes.indiatimes.com/industry/energy/power/niti-aayog-to-rank-states-on-energy-
efficiency/articleshow/57301449.cms.
Rao, S.L. (2012). Coordination in energy sector and its regulation in India. Institute for social and eco-
nomic change. 113–120.
Reddy, B.S. (2014). Measuring and evaluating energy security and sustainability: A case study of India.
Statistics. (2017, December 5). Retrieved January 5, 2018, from https://www.iea.org/publications/.
Yu, S., Evans, M. & Delgado, A. (2014). Building energy efficiency in India: Compliance evaluation of
energy conservation building code. U.S Department of Energy.
Nahlah Basheer
Masters of Urban Planning, Government Engineering College, Thrissur, Kerala, India
C.A. Bindu
School of Architecture, Government Engineering College, Thrissur, Kerala, India
ABSTRACT: Natural resources and the extent of their management are important for any region, particularly to meet the demand for resources in these times of change. There should be a basis for effective planning and management of these resources, which play environmental, ecological, socio-cultural and economic roles, among many others. The need for an integrated development plan for Valapad, a coastal gramapanchayat of Thrissur district in Kerala, India, highlighted the requirement for a detailed on-site study of the environmental sector of the area. The focus was on developing effective water management and green infrastructure simultaneously. Changes in land use and land utilization, encroachment, low public awareness and the destruction of flora and fauna, for example, have contributed to diverse effects that are irreversible. This paper focuses on the extent of environmentally sensitive areas present in the region, the existing scenario and the strategies that can be adopted for their management.
1 INTRODUCTION
The long-term sustainability of any urban or semi-urban area is often equated with economic growth, but in reality it depends crucially on the social, economic and environmental dimensions of the environmental sector. A multidimensional process like planning and development requires an in-depth probe into matters of environment and protection. The case of Valapad, a coastal gramapanchayat in the Thrissur district of the south Indian state of Kerala, studied for the formulation of an integrated development plan, is explored in this paper.
2 REGIONAL SETTING
Valapad gramapanchayat lies in the Manappuram region of the west coast in Thrissur,
which has a separate island-like formation. This seaboard tract extends from Chettuva in the
north, to Azhikode (Munambam) in the south. The total length of the belt is approximately
56 km and the total width varies from four to eight km. The water bodies around the belt
are Karuvannur River, Canoli Canal and the Arabian Sea. The Karuvannur River flows
encircling this formation in the northern side, joined by the Canoli Canal from almost the
mid portion, flowing southwards. When it reaches the Munambam region, it is flushed by the
adjoining waters of the Periyar River. On the western side of the land is the Arabian Sea. It
is this island-like encapsulated land that we call the Manappuram region. Valapad is situated
toward the middle of the Manappuram region. The approximate width observed is 5 km.
Valapad has a ridge-and-valley topography, in which the valley areas are drained throughout by natural streams (known locally as thödu), some of which were once navigable. Other valley areas were wetlands and cultivable paddy fields. The depressions were filled with
3.1 Ponds
There are numerous ponds in the gramapanchayat area, each a complete natural ecosystem. In earlier times almost all households had one or more ponds, with one source used for bathing and another for drinking. Most of the ponds of Valapad are mapped in Figure 2, with Muriyathoodukulam, Kothakulam, Ambalakkulam and Thirunellikkulam being the important ponds in the area. Although there are numerous ponds here, most of them are not in good condition.
Figure 1 details a few ponds by their nature of ownership (for example private, public and temple ponds), their usage, features and pollution rates. The commonly observed threats to ponds here are pollution, land use changes, climate change, inefficient water management techniques, improper management of pond water, intensive use for irrigation, fish overstocking, and degraded buffers.
Figure 4. Some sacred groves of Valapad gramapanchayat (a) Thekkiniyedath Naagakkaavu
(b) Arayamparambil Kaavu (c) Paarekkaatt Kaavu (d) Cheeramkaattil Kaavu.
well maintained ones. All the others are cleared of the lush vegetation and reduced to a single
platform for worship. The few notable ones are shown in Figure 4. Of these, the Adipparam-
bil Kaavu (Nagayakhi—Sarpakkaavu) of the Arayamparambil family is the richest with more
than 100 varieties of medicinal plants and other flora in the thirty cent premises. According to
the studies conducted by Jincy, T.S. and Subin, M.P. (2015), the area of this sacred grove has
been reduced to thirty cents from eighty-four cents in 1998, following land partition.
3.4 Streams
The 3,880 m long Pannatthödu, the 5,320 m long Paalamthödu, the 7,640 m long beach thödu, the 1,200 m long Netkot thödu, and the 9,620 m long Arappathodu, flowing through the wards, are the most important streams in Valapad, and each has multiple feeder and connector streams. The streams are numerous, but in terms of water flow and connectivity the situation is degrading. Neglect and unrestricted waste dumping have resulted in blockages. Restricted water flow during the monsoons leads to flooding of the streams and, as a result, people resort to unscientific water clearances: in a practice known as the breaking of the Arappa (estuary), the rainwater that collects and floods the land is allowed to flow directly into the sea, giving no chance for groundwater retention.
3.5 Mangroves
Thrissur district has one of the lowest extents of mangroves in the state. Presently, mangroves are confined to the backwaters of Chettuwai, Azhikkodu and Kodungallur, and a few patches in Venkidang and Pavaratty Panchayats. Valapad has an ecosystem favorable to the growth of mangroves. Mangroves are present along the Kothakulam Arappa area, bordering the Kothakulam beach ward, located approximately 500 m from the sea. This is one of the most picturesque spots in the panchayat. However, there are no buffers around the mangroves and, sooner or later, they may suffer from encroachment as elsewhere.
3.7 Biodiversity
Being a coastal panchayat, Valapad has the potential to support a large amount of aquatic and terrestrial biodiversity. Changes in land use and encroachment on protected areas have depleted the wilderness and ecologically sensitive areas, endangering plant and animal life. There has also been a reduction in the medicinal plants that were once easily and widely available in the panchayat.
In Valapad gramapanchayat, the environment-related issues of water shortage, pollution and waste management are also linked with the shortage of physical infrastructure. The problems of waste management are acute, and this has led people to resort to burning plastic waste or dumping it into water bodies, which pollutes the waters and disrupts the connectivity of flowing water in the streams. Air and noise pollution along the NH at peak traffic hours is one of the negative effects of traffic and transportation on the environment sector.
The beach and beachfront ambience is the main attraction of Valapad for tourism. By incorporating the principles of environment and ecology suited to nature tourism, responsible ecotourism projects can gain momentum in the area. Integrated tourism planning is essential for this, and such initiatives can help in the sustainable development of the area.
The strategies pertaining to the protection, conservation and maintenance of natural resources to be adopted in Valapad include those related to ponds, streams, sacred groves, mangroves, flora and fauna, groundwater recharge, waste management and so on, for a better living environment. In the case of Valapad, securing reasonable participation of the stakeholders, through early and effective consultations by all partners, is crucial to framing a partnership. The actions required for this management shall be exercised on three levels, as shown below in Figure 9.
Ponds: Though Valapad is known to have thousands of ponds within the gramapanchayat, the area lacks an action plan to maintain these assets. Table 2 below gives relevant strategies that can be adopted for the effective maintenance of ponds, the lifeline of water retention in the panchayat.
Streams: Table 3 below gives relevant strategies that can be adopted for effective mainte-
nance of streams here. For any action, the existing conditions of water channels and their
connectivity need to be studied and mapped.
Levels Strategies
1 Regular cleaning (especially of the ponds used for the holy dip of Aaraattpooram, like
Kothakulam, and Sethukulam).
2 Resource mapping of existing unfilled ponds. Curb unscientific landfilling of waterbodies. Introduce functional interfaces along public ponds in a regulated manner.
3 Devise a maintenance partnership plan through efforts from beneficiaries (private owners) and the panchayat at a reasonable share.
Sacred Groves: The present conditions show that the sacred groves of Valapad have attained religious recognition but no environmental recognition. Owing to the strong religious beliefs surrounding them, private owners put effort into their conservation. Only when a grove comes to be recognized as an environmentally sensitive spot will it become a social responsibility.
Mangroves: Mangroves are one group of natural elements which aid in many ways against
environmental and natural hazards. Table 5 gives relevant strategies that can be adopted for
their conservation in Valapad gramapachayat.
Medicinal Plants: Table 6 below lists relevant strategies that can be adopted for the revival
of medicinal plants that once adorned the panchayat area.
Water Shortage: Projects like Mazhapolima can be integrated, with costs managed as in other sample areas such as Guruvayur Municipality and Engandiyoor Panchayat. A plan is proposed whereby 25–30% of the project cost (for an individual household) is provided as a beneficiary contribution. Once the beneficiary list is submitted to the
Levels Strategies
1 Promote afforestation of medicinal plants and trees inside the groves for better
environmental stability.
Manage a data bank of existing well managed sacred groves in the gramapanchayat by
laying them out into categories of size, density of vegetation and expanse.
2 General awareness about the social and environmental need for conservation (apart from
the religious concepts) and existing assistance schemes.
Spell out protection and conservation projects in the panchayat, with funding provided from local panchayat funds. There ought to be protection using buffers (natural/artificial) according to the data gathered.
3 Devise a conservation plan based on the principle of partnership, with efforts from beneficiaries (private owners), local communities in the nearest vicinity and the local authority (panchayat) at a reasonable share.
Levels Strategies
1 Boost existing saplings and initiate the planting of new saplings in the Kothakulam Arappa area; at the same time, protect the already planted sapling areas in Palappetty.
2 Identify feasible zones for the planting of mangroves.
3 Devise a maintenance partnership plan through efforts from beneficiaries (private owners) and the panchayat at a reasonable share.
Levels Strategies
1 Clearing large neglected sites in each locality will provide enough space for planting.
2 Identify suitable areas with potential for a botanical medicinal plant garden.
3 MGNREGA workers and youth clubs can work together on site preparation. Devise a management plan involving the women of Kudumbashree, the environment clubs of schools, Government Ayurveda Hospital Valapad and the many practitioners of naturopathic therapy.
gramapanchayat, a technical team of the project can provide the necessary support for the
installation of the open well recharge units.
For initiating the project, the panchayat area can be broken down into clusters, selected on the basis of land use or population density. For example, on the basis of land use, the clusters can be as in Figure 10 below: cluster 1 (residential), cluster 2 (residential, commercial/industrial) and cluster 3 (residential, commercial/industrial, public/semi-public use). In cluster 1 itself, there are three different zones at varying scales. In stage 1, a small set of houses can be selected for initiating the project. In the next three months, the stage 2 zone can be integrated with the existing trial area. Finally, the stage 3 zone can be added within the following five months to complete the whole zone. Expanding the zones one by one is very important in this project because previous successful projects show that good groundwater recharge rates are achieved only through collective participation and involvement across a wide area. Within a span of two years, the whole panchayat area shall be successfully integrated under the project.
Waste Management: This is to be commenced at the household level. Most households have ample land parcels and can manage their own food waste; replacing plastic with reusable/biodegradable materials wherever possible can also be linked to better results. There will also be scope for the use of modern technologies, such as baling machines, in the near future.
6 CONCLUSION
The cases of land and water pollution are acute in Valapad, owing to the improper and unscientific waste disposal techniques adopted. There are instances of certain public ponds being cleaned on schedule under MGNREGA works, but at the household level most ponds are used for waste dumping and left for filling. In terms of disaster susceptibility, the coastal areas along the Kothakulam and Nattika Arappa (estuary) portions are prone to storm surges and accretion. In a time of reduced rainfall and drought, this is a very serious problem that requires higher regard and concern. The existence of a strong blue-green network is a backbone for the development of any area. Managing and protecting this network through effective planning strategies and policies will strengthen not only the life of these networks but also that of the people of Valapad, especially because of the multiple benefits these elements serve. This is indeed a vision toward an ultimate living environment for the present as well as for generations to come.
REFERENCES
ABSTRACT: Urban food insecurity is a major challenge associated with the phenomenon
of global urbanization. Several such multi-scale socioecological challenges have necessitated
the re-emergence of the concept of urban metabolism, which essentially deals with the flow
of energy and materials into and out of the city. In cities, the production and processing of food take place away from the consumers, while its consumption and disposal remain with them, revealing an obvious rift in the urban metabolism in terms of the food flow. The authors, with the help of the emergy model, establish a strong link between urban food systems and the urban land use pattern, highlighting the importance of urban planning that is sustainable from social, economic and ecological perspectives. This model envisages a city that produces its own food and nutrition as well as distributes, consumes and disposes of it by virtue of the efficient systems generated by the urban resources.
Keywords: urban metabolism, emergy model, emergy analysis, urban agriculture, closed-
loop food system, urban food security, urban land resources, urban land use plan, urban
planning
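Since the argument rests on emergy analysis, a minimal sketch of the underlying accounting may help: each input flow to the city is converted to solar emjoules (seJ) by multiplying it by its unit emergy value (UEV, or transformity), and the results are summed. All flows and UEVs below are illustrative placeholders, not values from this paper.

# Minimal emergy accounting sketch: emergy = flow x unit emergy value (UEV),
# summed over inputs. All numbers are illustrative placeholders.
flows_j_per_yr = {"sunlight": 5.0e15, "fuel": 2.0e13, "food_imports": 8.0e12}
uev_sej_per_j = {"sunlight": 1.0, "fuel": 6.6e4, "food_imports": 2.0e5}  # assumed

total_emergy = sum(flows_j_per_yr[k] * uev_sej_per_j[k] for k in flows_j_per_yr)
print(f"Total emergy use: {total_emergy:.2e} seJ/yr")
for k, f in flows_j_per_yr.items():
    share = f * uev_sej_per_j[k] / total_emergy
    print(f"  {k}: {share:.1%}")   # which inputs dominate the urban metabolism

Because transformities weight low-quality and high-quality inputs on a common solar basis, such shares reveal which flows, such as imported food, dominate a city’s metabolism.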
1 INTRODUCTION
According to UN reports on world urbanization prospects, around 66% of the world’s popu-
lation will live in cities by the year 2050. The majority of this urban growth will take place
in low and middle income countries. The projected urbanization shows an alarming rate
and scale that can raise slum populations to 2 billion people (United Nations Report, 2014).
Urban planning, design and governance are becoming critically important for human survival. The statistical evidence points to the fact that humans, as social animals, inhabit ever smaller land parcels: the whole of the existing urban areas covers only around 1–3% of the land area of the earth. The non-convertible wastes generated here are also solely of human origin. All human-made systems inadvertently over-depend on or exploit nature, treating it as an inexhaustible vessel to meet their material needs.
The urban data also reveal another unpleasant fact: many city-dwellers, particularly those living in slums, still suffer from malnutrition (Amerinjan, 2017). Urban dwellers, especially the urban poor, are less privileged than rural people in terms of access to healthy, nutritious food. This can be attributed to many factors, including the fast pace of urban life and the skewed socioeconomic bias of the urban land use pattern. The economy of a city is considered to depend mostly on the income generated by the business, commercial and industrial activities taking place on its built-up land; hence, incorporating or retaining green areas in urbanized areas is given less priority and consideration. As the production of food is possible primarily where agricultural land is made available along with labor, skill and infrastructure facilities, urban areas by default become consumers of imported rural food and highly processed agricultural products. In general, the high energy cost of transporting food products from outside, in many cases from another continent, drastically reduces
2 FOOD SECURITY
Definition of food security has changed over the years. It was defined in the 1974 World Food
Summit as the ‘availability at all times of adequate world food supplies of basic foodstuffs
to sustain a steady expansion of food consumption and to offset fluctuations in production
and prices’ (FAO, 2003). In 1983, Food and Agricultural Organization of the United Nations
(FAO) defined it as ‘ensuring that all people at all times have both physical and economic
access to the basic food that they need’ and thus expanded the concept to include access by
vulnerable communities or people to available supplies, implying that the demand and sup-
ply side of the food security equation should be balanced. The 1994 United Nations Devel-
opment Program (UNDP) Human Development Report added a number of component
aspects to human security of which food security was only one. This concept, that related
food to the human rights, helped start discussions about food security. The 1996 World Food
Summit adopted a still more complex definition: ‘Food security, at the individual, household,
national, regional and global levels [is achieved] when all people, at all times, have physical
and economic access to sufficient, safe and nutritious food to meet their dietary needs and
food preferences for an active and healthy life’ (FAO, 2003).
The State of Food Insecurity refined the definition in 2001 as: ‘Food security [is] a situ-
ation that exists when all people, at all times, have physical, social and economic access to
sufficient, safe and nutritious food that meets their dietary needs and food preferences for an
active and healthy life’ (FAO, 2003).
Hence the accepted definition of food security can be expressed as follows: food security exists
when all people, at all times, have physical, social and economic access to sufficient, safe and
nutritious food that meets their dietary needs and food preferences for an active and healthy
life. Household food security is the application of this concept at the family level, with indi-
viduals within households as the focus of concern, and 'food insecurity exists when people
do not have adequate physical, social or economic access to food as defined above' (FAO,
2003). In the state of Kerala, food insecurity has become a huge challenge because of the
toxic content and questionable safety of food imported from the neighboring states. Of late,
Kerala has witnessed a shift toward vegetable production in its urban areas for local
consumption, rather than reliance on food imports from distant places.
3 URBAN INDIA
According to the 2011 census, 377 million Indians lived in urban areas, compared with
286 million in 2001. For the first time since independence, the absolute increase in popula-
tion was greater in urban areas than in rural areas (Census India, 2011). A study of Indian cities
by the McKinsey Global Institute reveals that the current performance of Indian cities is poor
across key indicators of quality of life, such as water supply, public transportation,
parks and open spaces. On current trends, the quality of urban services is expected to fall
further by 2030 (McKinsey, 2010), for reasons obvious even today.
Figure 2. (Caption not recovered; Source: UN DESA, United Nations Department of Economic and Social Affairs, Rome, 2010.)
This scale and complexity of urbanization in India demands a comprehensive strategy for
addressing the urban challenges, especially that of food security. Intensive data preparation
is necessary to devise strategies for urban resource allocation, and accurate metrics are
required to ensure the effective execution of those strategies.
Town or urban planning in India is still in its infancy. While an urban master plan governs
the growth of urban areas, a Town Planning Scheme (TP Scheme) is also widely adopted
in various states. Land use planning, the most important feature of all development plans
dealing with spatial planning, is critical for all development purposes, since it assigns a
particular activity to a given parcel of land based on the concept of optimal land rent, where
the very definition of optimal use is limited to economics. The authors are of the opinion
that there has to be another dimension to it: the solar energy falling on the urban land
surface, used for the production of food from within, which is important for sustainable
development.
Zoning and land use planning help to ensure dedicated land for segregated and planned
activities that have positive impacts on the economy and to avoid conflict in activities that
affect the quality of life in those areas. Unfortunately many of the fast urbanizing, middle
5 URBAN METABOLISM
A city can be better understood if compared to a complex organism with various metabolic
processes. Howard T. Odum, an American ecologist, proposed the conceptual model of
urban metabolism (Odum, 1996), in which materials from within or outside the city are
transformed by a series of urban activities, finally converted to waste, and then released
into the environment. The urban metabolic system is composed of built-up land, farmland,
and unused land (Odum, 1996).
The inner environment of a city cannot by itself support all the metabolic activities;
materials and energy from outside are also needed, and additional mechanisms are required
to expel the wastes into the environment. Hence urban metabolism is understood and
expressed in terms of the production, consumption and processing of urban internal
resources, completed by waste disposal, as well as the flow of material and energy between
the internal and external environments (Odum, 1996).
Urban food security depends on the design and resilience of the urban food system, which
in turn depends on the urban metabolism. Metabolic rifts are caused by linear, interrupted
food loops, which arise from the long exclusion of food from the urban policy and planning
agenda.
The urban food system has a dynamic structure that exists between land, population, food
distribution and production processes, resources, technology, economy and employment
(Armendáriz et al., 2016).
A resilient urban food system addresses all four levels of the food system: food production,
processing, distribution and consumption (Amerinjan, 2017), which implies a closed-loop food
system. In cities, policies can be established to incentivize the local production of healthier
food options and to limit unhealthy food imports (Amerinjan, 2017). The Edo period in Japan
could be a good model for this: recycling and living with 'just enough' were made part of
public policy when Edo was facing an imminent environmental crisis (Brown, 2010). Almost
every material in Edo was recycled and very little went to waste; the need to recycle was
reflected in technologies and practices and in the way things were made in the first place. This
could arguably be the predecessor of the circular economy that we are trying to engineer today.
Similarly, we need a new paradigm within urban planning that will help ensure food security,
and thereby sustainability, in cities. Metric-based urban planning is required, in which the
urban metabolic density can be evaluated and land resources allocated accordingly.
Emergy is defined as the total amount of available energy (or exergy) of one kind that is used up
directly or indirectly in a process to deliver an output, product, flow, or service (Odum, 1996).
Emergy analysis is a method of measuring the value or quantity of materials and energy on
a single, uniform measurement standard, using specific conversion factors and combining
socioeconomic with eco-environmental systems, so that the flows and transformations of
materials and energy can be analyzed quantitatively (Huang et al., 2015).
The formula is given as
Em = τEx
where Em is the emergy of a material or energy flow, Ex is its available energy in joules, and
τ is the emergy transformity constant of that material or energy, i.e. the solar emjoules
needed to transform one joule of it (Huang et al., 2015).
According to the emergy model, different forms of energy, materials, human labor and
economic services are all evaluated on a common basis (the environmental support provided
by the biosphere) by converting them into equivalents of only one form of available energy,
the solar kind, expressed as the solar equivalent joule (seJ).
The concept of "available energy" allows the analyst to account for all kinds of resources
used (minerals, water, organic matter), not only energy carriers (Huang et al., 2015).
A typical material flow study can be seen in the figure. Food materials can be filtered from
it and studied separately. Using an emergy database, the solar transformity value can be
found for each material and multiplied by the available amount of that material to find its
emergy. The various materials and energies are categorized into groups, such as renewable,
non-renewable, industrial and labor, for ease and standardization of calculation. The solar
transformities of some materials (in seJ/J) are given below; a small worked example follows
the list.
1. Sunlight = 1
2. Agricultural production = 1.43 × 10^5
3. Livestock production = 9.15 × 10^5
4. Fisheries production = 3.36 × 10^6
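As an illustration only, the following Python sketch tallies emergy with Em = τEx, using the
transformities listed above; the flow names and annual amounts are hypothetical placeholders,
not data from this paper or its sources.

# Minimal sketch of an emergy tally (Em = tau * Ex) for urban food flows.
# Transformities (seJ/J) are from the list above; the annual available
# energy amounts (Ex, in joules per year) are hypothetical placeholders.
TRANSFORMITY = {
    "agricultural_production": 1.43e5,
    "livestock_production": 9.15e5,
    "fisheries_production": 3.36e6,
}

flows_joules = {  # assumed annual food flows into a city (Ex, in joules)
    "agricultural_production": 4.0e14,
    "livestock_production": 6.0e13,
    "fisheries_production": 1.5e13,
}

def emergy(category: str, ex_joules: float) -> float:
    """Return Em = tau * Ex in solar equivalent joules (seJ)."""
    return TRANSFORMITY[category] * ex_joules

total_sej = sum(emergy(c, ex) for c, ex in flows_joules.items())
for c, ex in flows_joules.items():
    print(f"{c}: {emergy(c, ex):.3e} seJ")
print(f"Total food-flow emergy: {total_sej:.3e} seJ")

Summing such category totals (renewable, non-renewable, industrial, labor) and dividing by
land area would give the metabolic (emergy) density referred to below.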
In this way, an emergy-based evaluation of urban land can also be done. By calculating the
correlation coefficient between the increments in the various urban land types, such as
built-up land, and the total metabolic density of the city, the impact of each land type can
be found. If a city's metabolic activities depend chiefly on the activities on its built-up land,
it can be concluded that changes in those activities will affect urban development more
significantly than changes on other land. Considering the environmental load of the built-up
land, planners can insist on more farmland/agricultural land being included in the urban
master plan, to match the emergy value of the food materials coming into the urban area
and requiring disposal there. To address the space shortage in cities, appropriate policies may
be designed and implemented to ensure that there are enough sun-exposed roof spaces for
urban agriculture/farming. This will further necessitate the scientific disposal and recycling
of wastes, which can in turn be used for the production of food items within the urban area
itself. Thus, the urban food system can be made a closed loop, fostering sustainability and
resilience and, eventually, a food-secure urban community.
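The correlation analysis described in the preceding paragraph might be sketched as follows;
the yearly series are invented for the example and are not data from the paper.

# Minimal sketch: correlate increments in built-up land area with increments
# in total urban metabolic (emergy) density. All numbers are hypothetical.
import numpy as np

built_up_km2 = np.array([210.0, 224.0, 241.0, 263.0, 280.0, 301.0])
metabolic_density_sej = np.array([3.1e20, 3.4e20, 3.9e20, 4.6e20, 5.0e20, 5.7e20])

# Year-on-year increments of each series.
d_built = np.diff(built_up_km2)
d_density = np.diff(metabolic_density_sej)

# Pearson correlation of the increments: a value near 1 would suggest that
# changes on built-up land dominate the city's metabolic development.
r = np.corrcoef(d_built, d_density)[0, 1]
print(f"Correlation of increments: r = {r:.3f}")

A high coefficient for built-up land, in this reading, would support the planners' case for
rebalancing land allocation toward farmland in the master plan.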
8 CONCLUSION
This paper has conceptually explored emergy as a metric for urban resource
allocation to achieve urban food security. Emergy is useful in evaluating and relating the
REFERENCES
P. Varghese
Karpagam University, Coimbatore, Tamil Nadu, India
ABSTRACT: Questions have been raised regarding the feasibility of smart cities: whether
such policies could be lopsided when set against normally developing urban processes, which
adopt preconceptions or a uniform distribution of resources, or against traditional lifestyles.
Some assumptions regarding urbanization adopted by Glaeser (2011), as well as by the
McKinsey report (2010), are reviewed, especially within the Indian context. Urbanization
needs to be seen within a historical/traditional perspective and does not necessarily take the
smart-city development policy as a given. It is argued instead that providing urban amenities
in rural areas could be an alternative or parallel option, for multiple reasons.
1 INTRODUCTION
5.3 Alternatives—smart-villages
An alternative to such cities, smart, hi-tech or otherwise, is the possibility of smart-villages.
Home-grown solutions proposed by technologists who want to see change from the ground
up, from the grass-roots, indicate that development should happen not merely in selected
areas or metropolises of the country, but all over. One such case is PURA (Providing Urban
Amenities to Rural Areas), proposed by A.P.J. Abdul Kalam, an eminent space technologist
and a past president of the country. He envisaged that such technological developments
should happen from the level of the villages themselves;
6 CONCLUSION
The idea that urbanization is inevitable only prepares the individual for an uncertain
future, especially those from rural backgrounds. Will the move to the city ensure that the
individual will be able to fight the odds to make a living? Is farming the land an outdated
pursuit? Are we as a society preparing future generations for an inevitability that no one
knows? The answers are not clear.
Glaeser (2011) examines the city from a post-established perspective and rationalizes its
existence; the McKinsey (2010) perspective is neither a ground-up viewpoint nor
multidimensional, and proposes the smart-city as a 21st-century solution to a socio-economic
inevitability. This approach is fallacious, especially within the Indian context, and other
models of development need to be looked at concurrently. In the final analysis, smart-cities
could become a reality, but maybe for the wrong reasons; they need not develop for the
reasons mentioned. The investment in smart-villages could, it is rationalized, have a greater
benefit to the population. National policies ought to have a multidirectional perspective,
which will drive current thinking and actions toward a more definite future.
REFERENCES
Chauvin, J.P., Glaeser, E.L., Ma, Y. & Tobio, K. (2016). What is different about urbanization in rich and
poor countries? Cities in Brazil, China, India and the United States (NBER Working Paper No. 22002).
Cambridge, MA.
Datta, A. (2015). New urban utopias of postcolonial India: ‘Entrepreneurial urbanization’ in Dholera
smart city, Gujarat. Leeds, UK: University of Leeds.
Detter, D. & Fölster, S. (2017). The public wealth of cities: How to unlock hidden assets to boost growth
and prosperity. Brookings Institution Press.
Feenan, R. et al. (2017). Decoding city performance: The universe of city indices 2017. Chicago: Jones
Lang LaSalle IP, Inc. and The Business of Cities Ltd.
Glaeser, E.L. (2007). The economics approach to cities (Working Paper 13696).
Glaeser, E.L. (2011). The triumph of the city. London, UK: Macmillan.
Glaeser, E.L. (2013). A world of cities: The causes and consequences of urbanization in poorer countries
(NBER Working Paper No. 19745).
GoI. (2011). Census of India. [Online] Retrieved from http://www.censusindia.gov.in/2011census/
[Accessed 25 October 2017].
Jha, A., Sharma, S. & JLL-ASSOCHAM. (2015). Housing for all: Catalyst for development and inclusive
growth. New Delhi: JLL-ASSOCHAM.
Li, Y., Lin, Y. & Geertman, S. (2015). The development of smart cities in China. Cambridge, MA: MIT.
McKinsey Global Institute. (2010). India's urban awakening: Building inclusive cities, sustaining economic
growth. New Delhi: McKinsey Global Institute.
MoUD. (2017). http://smartcities.gov.in/content/ [Online] Retrieved from http://smartcities.gov.in/con-
tent/ [Accessed 25 October 2017].
Ravi, S., Tomer, A., Bhatia, A. & Kane, J. (2016). Building smart cities in India: Allahabad, Ajmer, and
Visakhapatnam. Brookings India.
UNFPA. (2007). State of world population. [Online] Retrieved from www.unfpa.org [Accessed
25 October 2017].
Madhura Yadav
School of Architecture and Design, Manipal University, Jaipur, Rajasthan, India
1 INTRODUCTION
Environment management is not a new concept; the Vedic vision to live in harmony with
the environment was not merely physical but far wider and much more comprehensive. The
Vedic message is clear that the environment belongs to all living beings, so it needs protection
by all, for the welfare of all. The Mahabharata, Ramayana, Vedas, Upanishads, Bhagavad
Gita, Puranas and Smriti comprise the original messages for preservation of the environ-
ment and ecological balance. Nature, or Earth, has never been considered a hostile element to
be conquered or dominated. In fact, man is forbidden from exploiting nature; he is taught
to live in harmony with nature and to recognize that divinity prevails in all elements, including
plants and animals. The relation of human beings with the environment is natural, as they
cannot live without it, and from the very beginning of creation they have sought to know it
for self-protection and benefit. Awareness of the philosophy of karma is fundamental to the
working of the universe: the law of karma (like Newton's third law of motion) states that for
every action there is an equal and opposite reaction, in all parts of the universe. If we
understand this law to be universal and immutable, then we would never think of harming
the earth or its people.
With the ever-increasing development by modern man, human activity has interfered with
the fragile and complex interrelationships of our holistic universe and damaged the ecologi-
cal system and natural processes. Humanity has gradually lost its sensitivity to the world
of vibration. We have become more concerned with material development and insensitive
toward nature. Nature is made up of five elements, space, air, fire, water, and earth, which
are the foundation of an interconnected web of life, of the interconnectedness of the cosmos
and the human body. These five great elements that constitute the environment are all
derived from Prakriti, the primal energy. Environmental study deals with the analysis of the
processes in water, air, land, soil and organisms that lead to pollution or degradation of the
environment. Nature has maintained a balance between and among these constituents or
elements and living creatures. A disturbance in
2 METHODOLOGY
Figure 1. Microstructure of water molecules from different geographic locations of the world.
[Figure: water crystal images labelled 'Thank you', 'Love and appreciation', and 'You make me sick, I will kill you'.]
It is observed that the crystal formed under the negative words 'You make me sick, I will kill
you' resembles an image of polluted water, showing the effect of thoughts and intent on
water. Below left is an image of very polluted and toxic water from the Fujiwara Dam; below
right is the same water after a Buddhist monk (Reverend Kato Hoki, chief priest of Jyuhouin
Temple) offered a prayer over it for one hour. Prayer, that is, sound coupled with intention,
seems to have an extraordinary ability to restore the water to its natural, harmonious,
geometric symmetry. A bowl containing mineral water was also placed on a table in front of
Dadi Janki, Administrative Head of the Brahma Kumaris World Spiritual University.
The Rural Development Wing (RDW) of the Rajyoga Education and Research Foundation
(RERF) has been working in partnership with government institutions, NGOs and research
institutes to empower thousands of farmers through thought-based technology (meditation)
combined with organic farming, as used in rural India for some years.
To face the challenges in agriculture, some farmers decided to undertake experiments on
their farms using Rajyoga meditation practice in their daily activities. Farmers using this
technique reported considerable improvements in resistance to disease, pests and adverse
weather conditions. The meditation involves creating the awareness of being the subtle
conscious being rather than the physical body, and then directing thought energy (peace,
love and power) from the Divine Source to the crops.
More than 400 farmers all over India are now practicing sustainable yogic and natural
farming techniques. A number of Indian agricultural universities and researchers have also
taken up research in order to measure and quantify the actual advantages.
Some preliminary findings have been noted as follows:
1. The germination rate of meditated seeds was 93.33%, in contrast with 86.67% for non-meditated seeds; meditated seeds also grew faster, germinating six days earlier.
2. Significant growth in the population of friendly insects.
3. Meditated wheat crops showed higher micronutrient (iron) content, oil seeds showed increased oil content, and vegetables showed improved protein and vitamins, thus increasing the energy value.
4. The soil showed higher microbial populations of rhizobium, azotobacter and azospirillum.
Other factors, such as root length and seed weight, were also greater in meditated samples,
and a significant drop in pest damage was noted (results extracted from a preliminary report
published by SD Agriculture University).
The basic methodology for the application of positive and elevated conscious thought in
farming, as recommended by the RERF, is as follows: the seeds are placed in the Brahma
Kumaris (BK) center for up to one month before sowing, followed by weekly meditations in
the fields by groups of BK teachers and students throughout the crop growth cycle.
Over the last two years, Italy, Greece and South Africa have been amongst the countries
experimenting with these techniques.
A review by Dr. Ndiritu of South Africa examined sound-based (acoustic frequency) and
thought-based (meditation) technologies for improving crop production and assessed their
potential. Published experiences of five farmers report substantial improvements in yield
and in resistance to disease, pests and drought; published data report yield improvements of
up to 32%, and up to 146% increases in nutritional constituent concentration. The techniques
are being promoted non-commercially by an NGO, and thousands of farmers have been
trained to use them. The improvements in crop production from acoustic frequency and
meditation techniques are found to be comparable to those from biotechnology, and their
potential in mitigating the global food crisis is considered large. Research and community-
based initiatives to promote the two techniques are therefore recommended.
The experiment by Dr. Emoto on water proved, with regard to the molecular structure
of water, that our intent (thoughts), words, ideas and music have a profound healing or
destructive effect on water. The human body is made up of 70% water. Ultimately it means that what
AIR is everywhere and gives unconditionally to all living things; it does not withhold its
love from anyone. Instead of polluting the air with negative thoughts, giving it the fragrance
of good wishes is to show respect. Ultimately, these wishes reach all living things through the air.
EARTH is stable and nourishing. It gives our feet a place on which to stand and nourishes
all living things. Just like the earth, our stage should be stable, so that we are not shaken by
anything, and our thoughts and actions should nourish both people and the environment.
On the physical level we have to observe cleanliness.
WATER is flexible and has the power to neutralize. An ocean is a symbol of unlimited
virtues; like water, we should be flexible in dealing with others, and we can learn from the
great ocean to neutralize negativity. On a physical level, even to let the tap drip is to
disregard the importance of water.
FIRE cleanses, purifies and transforms. Fire is a symbol of yoga; our yoga should be intense
like fire so that we are transformed by it. On a physical level, to abuse fire is to be irreverent;
fire rebels against our burning of trees, for example, by filling our air with smoke.
REFERENCES
M. Harisankar
Urban Planning, Government Engineering College, Thrissur, Kerala, India
C.A. Biju
School of Architecture, Government Engineering College, Thrissur, Kerala, India
ABSTRACT: The last few years have seen many countries focusing on sustainability in
their developmental agenda. In developed countries, cycling policy has in recent years
evolved from a peripheral matter into a priority, in line with the policies for other means of
transport. Being a clean, green and healthy mode, bicycling holds the key to sustainable
urban mobility and better environmental quality in our urban areas. This paper concentrates
on the need for urban planners to solve urban mobility problems by introducing and
integrating bicycle-oriented transit facilities with the existing transportation network in
Indian cities. Planning interventions are essential in providing traffic calming and
infrastructural support, coupled with appropriate policy backup, so as to encourage a modal
shift from motorized transport to bicycling; this is discussed in detail in this paper, citing
successful examples from around the world.
1 INTRODUCTION
The rapid urban growth and increased use of motor vehicles that most countries have expe-
rienced in recent years have created urban sprawl and a higher demand for motorized travel,
leading to a range of environmental, economic and social consequences. This effect is much
more pronounced in developing countries. As a sustainable alternative, bicycles can replace
or reduce the use of automobile transport, so there is an urgent need to integrate bicycles
into the transportation systems of our cities. Doing so can enhance the mobility of people
in urban areas and safeguard the accessibility of our congested cities, provided suitable
bicycle infrastructure is adopted. The bicycle provides freedom of movement to rich and
poor, young and old alike.
The National Urban Transport Policy (NUTP) has stressed the need for an approach to
transport planning that focuses on people, not vehicles. The United Nations Habitat Global
Report on Human Settlements also highlighted urban transport, with a focus on reducing
pollution and congestion, as a core area for advancing sustainable development in its Five
Year Action Agenda 2012–17 (Planning & Design for Sustainable Urban Mobility).
Indian cities have a latent demand for cycling and walking trips, and a topography that
suits them well; this can be exploited through the strategic planning and design of suitable
infrastructural support for bicycles, greatly enhancing urban mobility. This study discusses
the suitability of the bicycle as a sustainable mode for urban mobility, how far it can be
integrated with existing conditions, its planning and design considerations, and programs
and techniques for promoting bicycle infrastructure.
Policies for car-oriented transport development have resulted in more and more road
construction, which has clearly failed to cope with the ever-increasing demand of rapid
motorization and has made our roads more and more congested.
Figure 1. Vicious circle of car-oriented transport development (Source: Belter et al., 2012).
Figure 2. Distance/travel time ratio for different transport modes (Source: Belter et al., 2012).
of greenhouse gas emissions that contribute to climate change. Cycling provides an excellent
opportunity for individuals to incorporate physical activity into daily life, and the human-
scale urban environments that support cycling and discourage car use can improve social
interaction and increase community attachment, livability and amenity (Litman et al., 2009).
Medium and large cities have a typical bicycle modal share of 13–21% (Figure 3). Cycle
trips might be as low as 6–8% in mega cities; however, the absolute numbers amount to about
a million bicycles. In most of the medium and large cities in India, about 56–72% of trips are
short trips (less than 5 km in length), offering huge potential for bicycle use.
Figure 6. Provision for bike racks on buses (Source: Pucher & Buehler, 2012).
4 NON-INFRASTRUCTURAL/SOFT MEASURES
Most countries give as much importance to soft measures as to infrastructural measures.
Such programs concentrate on the positive sides of cycling. There are increasing
opportunities to promote cycling through sustainable transport schemes such as
"safe-routes-to-school", "safe-routes-to-leisure" projects and "bike-to-work" schemes. Also
The integration of cycling with public transportation helps cyclists cover trip distances that
are too long to be made by bike alone. Public transportation can also provide convenient
alternatives when cyclists encounter bad weather, difficult topography, gaps in the cycle
network or mechanical failures. There are four main approaches: provision of bike parking
at train stations and bus stops; bike racks on buses; permission and storage space to take
bikes on board trains; and the coordination of cycle route networks so that paths and lanes
lead to public transportation stops. Bike parking ranges from simple bike racks on sidewalks
near public transportation stops to advanced full-service bike stations; multi-storey bike
parking stations are increasingly used in Amsterdam in answer to the rising demand for bike
parking at important transit stations. The serious problem of bike theft in most countries
has increased the demand for secure parking, and such secure bike parking stations need to
be set up in India to increase cycling levels in our cities. The overcrowded public
transportation system and the difficulty of design modifications to trains and buses make
taking bikes on board increasingly challenging.
A public bicycle sharing system has a major role in solving the last-mile problem in cities.
It enables the public to pick up a bicycle from any self-service docking station and return it
to the same or any other docking station. The concept of bike sharing has recently been
introduced in India, and a few cities have already experimented with it, including FreMo in
Thane, the Green Bike Cycle Rental and Feeder Scheme and the Delhi Metro Cycle Feeder
Service in Delhi, Namma Cycle in Bengaluru, and Cycle Chalao in Mumbai.
Bicycles are an important means of transport in all urban areas of India. At present, most
residents in India depend upon non-motorized transport to meet their transportation needs.
The average share of cycles in medium-sized Indian cities varies between 3% and 7%. The
years after the industrial revolution saw a massive increase in the number of motor vehicles,
accompanied by a dip in the average share of bicycles in the major cities of India in the
1980s and 1990s. A large amount of utility cycling is still present in Indian cities as it is
Figure 7. Trends in bicycle modal share in 1980, 1990 and 2000 (Source: 20).
Case studies were undertaken based on the method of provision of bicycling facilities in cities
of different sizes. Smaller cities have shorter trip distances, lower levels of pollution and less
stressful traffic conditions, while large and mega cities, with a larger geographical area and
greater trip distances, make the provision of such facilities more challenging. Correspondingly,
the census classification of Indian cities is used to match Indian conditions.
7 ANALYSIS
The major findings from the foreign case studies are compared with existing conditions in
India to examine how far they can be applied in the Indian context. Because of the higher
population and population density of Indian cities compared with the European case study
cities, the census classification is used to divide Indian cities by size, and further analysis is
based on this.
8 STRATEGIES
Newer cities are likely to have more space for infrastructure, given the wider streets that are
common in these cities, but a lower-density, single-use development pattern means that des-
tinations are likely to be more dispersed. In such cities, a bicycle infrastructure may be used
more for recreation than transportation, especially if the city builds off-street bicycle paths.
In older cities, more compact, mixed-use development patterns keep destinations within
cycleable distance, but the city might not have enough space to incorporate bicycle infra-
structure, or the intensity of vehicular traffic may be too great to protect cyclists. Distances
may make utilitarian cycling feasible in these cities, but creating a positive perception of
cycling is challenging. The strategies are given for small and medium towns and for large and
metropolitan cities separately, along with some common strategies. Although these classifications
9 CONCLUSION
The extent to which bicycling facilities can be integrated with the existing scenario of a city
depends on the city's size, its inherent qualities and its existing transportation facilities.
Indian cities reflect a large potential for bicycle use, with a huge latent demand for bicycle
facilities and infrastructure. Mixed land use and polynucleated city structures with a high
percentage of short trips give an added advantage in providing bicycle facilities, while the
large percentage of informal housing, high population density and lack of space present
unique challenges. At the same time, there is a great need to provide such facilities in the
Indian scenario. Planning for bicycle facilities therefore cannot be carried out in isolation
but requires an integrated approach addressing the needs of all road users: a combination
of infrastructural measures coupled with policies and promotional support programs,
backed up by strong political will and community participation. Cycling offers a healthy,
cost-effective and equitable way to improve the sustainability of urban transportation
systems and build more livable cities.
REFERENCES
Belter, T., von Harten, M. & Sorof, S. (2012). Costs and benefits of cycling. European Union.
Cycling infrastructure design: How bicycling can save the economy. TSO Publishing House.
McClintock, H. (2002). Planning for cycling. Cambridge, England: Woodhead Publishing in Environ-
ment Management.
Pucher, J. & Buehler, R. (2012). City cycling. Cambridge, MA: The MIT Press.
Rastogi, R. (2011). Promotion of non-motorized modes as a sustainable transportation option: Policy
and planning issues. Current Science, 100(9), 1340–1347.
Tiwari, G., Arora, A. & Jain, H. (2008). Bicycling in Asia. TRIPP, IIT Delhi.
ABSTRACT: The economic policies introduced by the Government of India in 1991 played
a great role in shaping the cities of modern India. The flow of revenue from foreign countries
resulted in a major change in city structure. The availability of basic infrastructure was the
major determinant of the inflow of Foreign Direct Investment (FDI), which influenced the
states to improve their facilities. Multi-National Corporation (MNC) investment regions
witnessed large-scale development, resulting in large demographic changes due to migration.
Booming land prices in these areas forced growth in a vertical direction, which has impacted
the city form. The biggest challenge in FDI-induced urbanization is making the city inclusive,
and developments should be made with a vision for the future. A spatio-economic
development policy, wherein the spatial structure dictates the growth of the economy by
attracting investment, can guide the growth of the city by enhancing the positive and
reducing the negative impacts of FDI.
1 INTRODUCTION
2 IMPACTS OF FDI
5.5 Farmland
Any development should address the ecological sensitivity of the region, and productive
farmland should not be acquired for the development process. Agricultural production and
the livelihoods of many have been lost in the past with the setting up of SEZs and other
developments. First preference in development should be given to unfertile land, so that
farming activity is not lost and fertile land continues to be farmland. The past experience of
Nandigram in West Bengal points toward this aspect.
5.6 Security
Security issues are a major concern in these newly developed cities. Gurgaon, for instance,
faces many security issues, as its nightlife exists only within the glass buildings; the
segregation and planning have made the streets dead at night. This scenario, along with
gated communities that act as separate entities completely secluded from the happenings
outside their gates, increases security issues. Events outside these compounds are unknown
to their residents, making the streets unsafe at night. Introducing various functions in the
same place at various times of the day is one option for tackling this situation, and
enhancement of nightlife in these cities is an important aspect to be considered.
The spatial manifestations of FDI are so evident that they can dictate the way in which a city
develops. FDI-induced development can be seen around the world wherever an FDI policy
is enacted; the growth of Chinese cities is the best example in this regard.
The biggest challenge in FDI-induced urbanization is making the city inclusive. When the
Indian city, with its chaos and disorder, is converted into a global one or into a new
development such as an SEZ, low-income groups are not given much consideration. FDI
developments bring many employment opportunities, especially white-collar jobs, while the
job options for the poor and for those affected by land acquisition are limited to roles such
as security guard or gardener, making the division in society more pronounced. When a city
develops beyond a certain limit, it becomes a place for the rich only; even the middle class
finds it difficult to afford housing in such cities.
The major consideration is that developments should be planned, with a vision for the
future. The traffic congestion seen in many cities once they expand beyond their borders is
due to a lack of vision in the early planning stage. There is a need for a spatio-economic
development policy whereby the spatial structure dictates the growth of the economy and
the attraction of investment. Western countries have started developing in this manner, as
the shortcomings of older developments become more and more visible. Thus, to conclude,
FDI has a large impact on urban form and structure, with a number of positives and
negatives; it is hoped that a proper spatio-economic policy can enhance the positives and
reduce the negatives.
REFERENCES
Chadchan, J. & Shankar, R. (2009). Emerging urban development issues in the context of globalization.
Journal of ITPI (Institute of Town Planners India), 6(2), 78–85.
Chen, Y. (2009). Agglomeration and location of foreign direct investment: The case of China. China
Economic Review, 20(3), 549–557. doi:10.1016/j.chieco.2009.03.005.
Eldemery, I.M. (2009). Globalization challenges in architecture. Journal of Architectural and Planning
Research, 26(4), 343–354.
Mathur, O.P. (2005). Impact of globalization on cities and city-related policies in India. In H.W.
Richardson, C-H.C. Bae (Eds.), Globalization and urban development (Advances in spatial science).
Berlin, Germany: Springer-Verlag.
Mukherjee, A. (2011). Regional inequality in foreign direct investment flows to India: The problem and
the prospects. Reserve Bank of India Occasional Papers, 32(2), 99–127.
Shivam, N. & Keskar, Y.M. (2014). Impact of foreign direct investment over the city form, a case of
Hyderabad City, India. International Journal of Innovative Research in Science, Engineering and
Technology, 3(2), 9674–9682.
Wei, W. (2005). China and India: Any difference in their FDI performances? Journal of Asian Economics,
16(4), 719–736. doi:10.1016/j.asieco.2005.06.004.
Zhu, J. (2002). Industrial globalisation and its impact on Singapore’s industrial landscape. Habitat
International, 26(2), 177–190. doi:10.1016/S0197-3975(01)00042-X.