Sajjad “JJ” Arshad, Ph.D.
San Jose, California, United States
5K followers
500+ connections
About
I hold a PhD in Information Assurance (Cybersecurity) from Northeastern University…
Experience
Education
Volunteer Experience
-
Treasurer
Iranian Student Association at Northeastern University
- 1 year
Social Services
Publications
-
Cached and Confused: Web Cache Deception in the Wild
USENIX Security Symposium
Web cache deception (WCD) is an attack proposed in 2017, where an attacker tricks a caching proxy into erroneously storing private information transmitted over the Internet and subsequently gains unauthorized access to that cached data. Due to the widespread use of web caches and, in particular, the use of massive networks of caching proxies deployed by content distribution network (CDN) providers as a critical component of the Internet, WCD puts a substantial population of Internet users at risk.
We present the first large-scale study that quantifies the prevalence of WCD in 340 high-profile sites among the Alexa Top 5K. Our analysis reveals WCD vulnerabilities that leak private user data as well as secret authentication and authorization tokens that can be leveraged by an attacker to mount damaging web application attacks. Furthermore, we explore WCD in a scientific framework as an instance of the path confusion class of attacks, and demonstrate that variations on the path confusion technique used make it possible to exploit sites that are otherwise not impacted by the original attack. Our findings show that many popular sites remain vulnerable two years after the public disclosure of WCD.
Our empirical experiments with popular CDN providers underline the fact that web caches are not plug & play technologies. In order to mitigate WCD, site operators must adopt a holistic view of their web infrastructure and carefully configure cache settings appropriate for their applications.
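To make the path-confusion mechanism concrete, the sketch below probes a hypothetical site by appending a static-looking suffix to a private page and then re-requesting that URL without credentials; the target URL, session cookie, and marker string are illustrative assumptions, not part of the study.

```python
# Hypothetical sketch of a web cache deception (WCD) probe, assuming a test
# account on "example.com" and a session cookie we control. This illustrates
# the path-confusion idea described above, not the paper's measurement pipeline.
import requests

TARGET = "https://example.com/account/settings"        # page with private data (assumed)
PROBE = TARGET + "/nonexistent.css"                     # path-confusion suffix; server may ignore it
SESSION_COOKIE = {"session": "victim-session-token"}    # placeholder credentials

# Step 1: visit the confused URL as the authenticated victim. If the origin
# server still returns the private page, an edge cache keyed on the ".css"
# extension may store the response.
victim = requests.get(PROBE, cookies=SESSION_COOKIE)

# Step 2: request the same URL with no credentials, as the attacker would.
attacker = requests.get(PROBE)

# If the unauthenticated response echoes private markers from the victim's
# page, the cache has likely been deceived into serving private content.
marker = "victim@example.com"  # assumed string unique to the victim's page
if victim.status_code == 200 and marker in attacker.text:
    print("Potential WCD: private content served from cache without credentials")
else:
    print("No evidence of cached private content for this probe")
```
-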
HotFuzz: Discovering Algorithmic Denial-of-Service Vulnerabilities Through Guided Micro-Fuzzing
Network and Distributed System Security Symposium (NDSS)
Contemporary fuzz testing techniques focus on identifying memory corruption vulnerabilities that allow adversaries to achieve either remote code execution or information disclosure. Meanwhile, Algorithmic Complexity (AC) vulnerabilities, which are a common attack vector for denial-of-service attacks, remain an understudied threat. In this paper, we present HotFuzz, a framework for automatically discovering AC vulnerabilities in Java libraries. HotFuzz uses micro-fuzzing, a genetic algorithm that evolves arbitrary Java objects in order to trigger the worst-case performance for a method under test. We define Small Recursive Instantiation (SRI) as a technique to derive seed inputs represented as Java objects to micro-fuzzing. After micro-fuzzing, HotFuzz synthesizes test cases that triggered AC vulnerabilities into Java programs and monitors their execution in order to reproduce vulnerabilities outside the fuzzing framework. HotFuzz outputs those programs that exhibit high CPU utilization as witnesses for AC vulnerabilities in a Java library. We evaluate HotFuzz over the Java Runtime Environment (JRE), the 100 most popular Java libraries on Maven, and challenges contained in the DARPA Space and Time Analysis for Cybersecurity (STAC) program. We evaluate SRI's effectiveness by comparing the performance of micro-fuzzing with SRI, measured by the number of AC vulnerabilities detected, to simply using empty values as seed inputs. In this evaluation, we verified known AC vulnerabilities, discovered previously unknown AC vulnerabilities that we responsibly reported to vendors, and received confirmation from both IBM and Oracle. Our results demonstrate that micro-fuzzing finds AC vulnerabilities in real-world software, and that micro-fuzzing with SRI-derived seed inputs outperforms using empty values.
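HotFuzz itself evolves Java objects inside the JVM; purely as an illustration of the timing-guided search idea, here is a toy Python stand-in that mutates string inputs and keeps the ones that make a hypothetical method under test run longest.

```python
# Toy stand-in for timing-guided micro-fuzzing (HotFuzz evolves Java objects
# with a genetic algorithm; this sketch only illustrates the core idea of
# mutating inputs to maximize a method's running time).
import random
import string
import time

def method_under_test(s: str) -> bool:
    # Hypothetical target whose work grows with input length (placeholder).
    return s == s[::-1] and s.count(s[:1]) * len(s) >= 0

def runtime(s: str) -> float:
    start = time.perf_counter()
    method_under_test(s)
    return time.perf_counter() - start

def mutate(s: str) -> str:
    # Insert, delete, or replace a random character.
    i = random.randrange(max(len(s), 1))
    c = random.choice(string.ascii_lowercase)
    op = random.choice(("insert", "delete", "replace"))
    if op == "insert" or not s:
        return s[:i] + c + s[i:]
    if op == "delete":
        return s[:i] + s[i + 1:]
    return s[:i] + c + s[i + 1:]

def micro_fuzz(seed: str, generations: int = 200, population: int = 16) -> str:
    pool = [seed]
    for _ in range(generations):
        candidates = pool + [mutate(random.choice(pool)) for _ in range(population)]
        # Keep the slowest inputs: longer running time = closer to worst case.
        pool = sorted(candidates, key=runtime, reverse=True)[:population]
    return pool[0]

if __name__ == "__main__":
    worst = micro_fuzz("a")
    print(f"slowest input found ({len(worst)} chars): {runtime(worst):.6f}s")
```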
-
A Longitudinal Analysis of the ads.txt Standard
ACM Internet Measurement Conference (IMC)
Programmatic advertising provides digital ad buyers with the convenience of purchasing ad impressions through Real Time Bidding (RTB) auctions. However, programmatic advertising has also given rise to a novel form of ad fraud known as domain spoofing, in which attackers sell counterfeit impressions that claim to be from high-value publishers. To mitigate domain spoofing, the Interactive Advertising Bureau (IAB) Tech Lab introduced the ads.txt standard in May 2017 to help ad buyers verify authorized digital ad sellers, as well as to promote overall transparency in programmatic advertising.
In this work, we present a 15-month longitudinal, observational study of the ads.txt standard. We do this to understand (1) if it is helping ad buyers to combat domain spoofing and (2) whether the transparency offered by the standard can provide useful data to researchers and privacy advocates.
With respect to halting domain spoofing, we observe that over 60% of Alexa Top-100K publishers have adopted ads.txt, and that ad exchanges and advertisers appear to be honoring the standard. With respect to transparency, the widespread adoption of ads.txt allows us to explicitly identify over 1,000 domains belonging to ad exchanges, without having to rely on crowdsourcing or heuristic methods.
However, we also find that ads.txt is still a long way from reaching its full potential. Many publishers have yet to adopt the standard, and we observe major ad exchanges purchasing unauthorized impressions that violate the standard. This opens the door to domain spoofing attacks. Further, ads.txt data often includes errors that must be cleaned and mitigated before the data is practically useful.
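The ads.txt format itself is a simple comma-separated record per authorized seller (exchange domain, seller account ID, relationship, optional certification authority ID). A minimal parsing sketch, with a placeholder publisher domain and deliberately simple error handling, might look like this:

```python
# Minimal sketch of fetching and parsing an ads.txt file per the IAB format.
# The publisher domain below is a placeholder; real data needs more robust
# error handling than shown here.
from urllib.request import urlopen

def parse_ads_txt(text: str):
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line or "=" in line:            # skip blanks and variable lines (e.g. contact=)
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append({
                "exchange": fields[0].lower(),
                "seller_account_id": fields[1],
                "relationship": fields[2].upper(),   # DIRECT or RESELLER
                "cert_authority_id": fields[3] if len(fields) > 3 else None,
            })
    return records

if __name__ == "__main__":
    publisher = "example.com"  # placeholder publisher domain
    with urlopen(f"https://{publisher}/ads.txt", timeout=10) as resp:
        records = parse_ads_txt(resp.read().decode("utf-8", errors="replace"))
    direct = [r for r in records if r["relationship"] == "DIRECT"]
    print(f"{len(records)} authorized seller records, {len(direct)} DIRECT")
```
-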
Understanding and Mitigating the Security Risks of Content Inclusion in Web Browsers
Doctor of Philosophy (PhD) Thesis
Thanks to the wide range of features offered by web browsers, modern websites include various types of content such as JavaScript and CSS in order to create interactive user interfaces. Browser vendors also provided extensions to enhance web browsers with additional useful capabilities that are not necessarily maintained or supported by default.
However, included content can introduce security risks to users of these websites, unbeknownst to both website operators and users. In addition, the browser's interpretation of the resource URLs may be very different from how the web server resolves the URL to determine which resource should be returned to the browser. The URL may not correspond to an actual server-side file system structure at all, or the web server may internally rewrite parts of the URL. This semantic disconnect between web browsers and web servers in interpreting relative paths (path confusion) could be exploited by Relative Path Overwrite (RPO). On the other hand, even though extensions provide useful additional functionality for web browsers, they are also an increasingly popular vector for attacks. Due to the high degree of privilege extensions can hold, extensions have been abused to inject advertisements into web pages that divert revenue from content publishers and potentially expose users to malware.
In this thesis, I propose novel research into understanding and mitigating the security risks of content inclusion in web browsers to protect website publishers as well as their users. First, I introduce an in-browser approach called Excision to automatically detect and block malicious third-party content inclusions as web pages are loaded into the user's browser or during the execution of browser extensions. Then, I propose OriginTracer, an in-browser approach to highlight extension-based content modification of web pages. Finally, I present the first in-depth study of style injection vulnerability using RPO and discuss potential countermeasures.
-
On the Effectiveness of Type-based Control Flow Integrity
Annual Computer Security Applications Conference (ACSAC)
Control flow integrity (CFI) has received significant attention in the community to combat control hijacking attacks in the presence of memory corruption vulnerabilities. The challenges in creating a practical CFI have resulted in the development of a new type of CFI based on runtime type checking (RTC). RTC-based CFI has been implemented in a number of recent practical efforts such as GRSecurity Reuse Attack Protector (RAP) and LLVM-CFI. While there have been a number of previous efforts that studied the strengths and limitations of other types of CFI techniques, little has been done to evaluate RTC-based CFI. In this work, we study the effectiveness of RTC from the security and practicality aspects. From the security perspective, we observe that type collisions are abundant in sufficiently large code bases, but exploiting them to build a functional attack is not straightforward. We then show how an attacker can successfully bypass RTC techniques using a variant of ROP attacks that respects type checking (called TROP), and we also build two proof-of-concept exploits, one against the Nginx web server and the other against the Exim mail server. We also discuss practical challenges of implementing RTC. Our findings suggest that while RTC is more practical for applying CFI to large code bases, its policy is not strong enough when facing a motivated attacker.
-
How Tracking Companies Circumvented Ad Blockers Using WebSockets
ACM Internet Measurement Conference (IMC)
In this study of 100,000 websites, we document how Advertising and Analytics (A&A) companies have used WebSockets to bypass ad blocking, exfiltrate user tracking data, and deliver advertisements. Specifically, our measurements investigate how a long-standing bug in the chrome.webRequest API of Chrome (the world's most popular browser) prevented blocking extensions from being able to interpose on WebSocket connections. We conducted large-scale crawls of top publishers before and after this bug was patched in April 2017 to examine which A&A companies were using WebSockets, what information was being transferred, and whether companies altered their behavior after the patch. We find that a small but persistent group of A&A companies use WebSockets, and that several of them engaged in troubling behavior, such as browser fingerprinting, exfiltrating the DOM, and serving advertisements, that would have circumvented blocking due to the Chrome bug.
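As a rough illustration of the kind of classification such a measurement involves, the toy sketch below separates WebSocket requests from HTTP requests in a made-up crawl log and matches them against a placeholder ad/tracker domain list; the study itself relied on large-scale crawls rather than data like this.

```python
# Rough sketch: separate crawled request URLs into WebSocket vs. HTTP traffic
# and match them against an ad/tracker domain list. Both the domain list and
# the request log are placeholders, not data from the study.
from urllib.parse import urlparse

AD_AND_ANALYTICS_DOMAINS = {"tracker.example", "ads.example"}  # placeholder list

requests_log = [  # (page, request URL) pairs as a crawler might record them
    ("https://news.example/article", "wss://tracker.example/socket"),
    ("https://news.example/article", "https://cdn.example/lib.js"),
    ("https://shop.example/", "ws://ads.example/beacon"),
]

def is_ad_or_analytics(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in AD_AND_ANALYTICS_DOMAINS)

websocket_aa = [
    (page, url) for page, url in requests_log
    if urlparse(url).scheme in ("ws", "wss") and is_ad_or_analytics(url)
]
print(f"{len(websocket_aa)} WebSocket connections to A&A domains:")
for page, url in websocket_aa:
    print(f"  {page} -> {url}")
```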
-
How Tracking Companies Circumvent Ad Blockers Using WebSockets
IEEE S&P Workshop on Technology and Consumer Protection (ConPro)
In this study of 100,000 websites, we document how Advertising and Analytics (A&A) companies have used WebSockets to bypass ad blocking, exfiltrate user tracking data, and deliver advertisements. Specifically, we leverage a longstanding bug in Chrome (the world’s most popular browser) in the chrome.webRequest API that prevented blocking extensions from being able to interpose on WebSocket connections. We conducted large-scale crawls of top publishers before and after this bug was patched in April 2017 to examine which A&A companies were using WebSockets, what information was being transferred, and whether companies altered their behavior after the patch. We find that a small but persistent group of A&A companies use WebSockets, and that several of them are engaging in troubling behavior, such as browser fingerprinting, exfiltrating the DOM, and serving advertisements, that would have circumvented blocking due to the Chrome bug.
-
Large-Scale Analysis of Style Injection by Relative Path Overwrite (Honorable Mention)
The Web Conference (WWW)
Relative Path Overwrite (RPO) is a recent technique to inject style directives into websites even when no style sink or markup injection vulnerability is present. It exploits differences in how browsers and web servers interpret relative paths (i.e., path confusion) to make an HTML page reference itself as a stylesheet; a simple text injection vulnerability along with browsers' leniency in parsing CSS resources results in an attacker's ability to inject style directives that will be interpreted by the browser. Even though style injection may appear less serious a threat than script injection, it has been shown that it enables a range of attacks, including secret exfiltration.
In this paper, we present the first large-scale study of the Web to measure the prevalence and significance of style injection using RPO. Our work shows that around 9% of the websites in the Alexa Top 10,000 contain at least one vulnerable page, out of which more than one third can be exploited. We analyze in detail various impediments to successful exploitation, and make recommendations for remediation. In contrast to script injection, relatively simple countermeasures exist to mitigate style injection. However, there appears to be little awareness of this attack vector as evidenced by a range of popular Content Management Systems (CMSes) that we found to be exploitable.
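The path-confusion step at the heart of RPO can be illustrated with Python's standard urljoin, which resolves relative references much as a browser does; the URLs below are illustrative only.

```python
# Sketch of the path confusion behind Relative Path Overwrite (RPO), using
# urljoin to mimic how a browser resolves a relative stylesheet reference.
from urllib.parse import urljoin

relative_stylesheet = "style.css"   # e.g. <link rel="stylesheet" href="style.css">

# Intended layout: the page lives at /article/ and the server also serves
# /article/style.css, so the relative reference works as the author expects.
print(urljoin("https://example.com/article/", relative_stylesheet))
# -> https://example.com/article/style.css

# Path confusion: many servers answer extra path segments with the same HTML.
# The browser, however, resolves the relative reference against the longer
# path, so the "stylesheet" URL now points back at the HTML page itself.
confused_page = "https://example.com/article/injected%22/"
print(urljoin(confused_page, relative_stylesheet))
# -> https://example.com/article/injected%22/style.css (served as the HTML page)

# If attacker-controlled text appears in that HTML and the browser leniently
# parses it as CSS, style directives can be injected without any markup
# injection vulnerability, which is the attack studied in the paper.
```
-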
Thou Shalt Not Depend on Me: Analysing the Use of Outdated JavaScript Libraries on the Web
Network and Distributed System Security Symposium (NDSS)
Web developers routinely rely on third-party JavaScript libraries such as jQuery to enhance the functionality of their sites. However, if not properly maintained, such dependencies can create attack vectors allowing a site to be compromised.
In this paper, we conduct the first comprehensive study of client-side JavaScript library usage and the resulting security implications across the Web. Using data from over 133 k websites, we show that 37% of them include at least one library with a known vulnerability; the time lag behind the newest release of a library is measured in the order of years. In order to better understand why websites use so many vulnerable or outdated libraries, we track causal inclusion relationships and quantify different scenarios. We observe sites including libraries in ad hoc and often transitive ways, which can lead to different versions of the same library being loaded into the same document at the same time. Furthermore, we find that libraries included transitively, or via ad and tracking code, are more likely to be vulnerable. This demonstrates that not only website administrators, but also the dynamic architecture and developers of third-party services are to blame for the Web’s poor state of library management.
The results of our work underline the need for more thorough approaches to dependency management, code maintenance and third-party code inclusion on the Web.
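One building block of such a measurement is simply comparing a detected library version against known-vulnerable ranges; the sketch below shows that comparison with a made-up vulnerability table, not the dataset used in the paper.

```python
# Hedged sketch: flag detected client-side libraries whose version is older
# than an assumed "fixed in" version. The table below is a placeholder.
KNOWN_VULNERABLE = {
    # library: list of fixed-in versions; anything older is flagged (assumed data)
    "jquery": [(1, 9, 0)],
    "angularjs": [(1, 6, 0)],
}

def parse_version(v: str):
    return tuple(int(part) for part in v.split(".")[:3])

def is_vulnerable(library: str, version: str) -> bool:
    fixed_versions = KNOWN_VULNERABLE.get(library.lower(), [])
    return any(parse_version(version) < fixed for fixed in fixed_versions)

detected = [("jquery", "1.7.2"), ("jquery", "3.6.0"), ("angularjs", "1.2.13")]
for lib, ver in detected:
    status = "VULNERABLE (outdated)" if is_vulnerable(lib, ver) else "ok"
    print(f"{lib} {ver}: {status}")
```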
-
"Recommended For You": A First Look at Content Recommendation Networks
ACM Internet Measurement Conference (IMC)
One advertising format that has grown significantly in recent years is the Content Recommendation Network (CRN). CRNs are responsible for the widgets full of links that appear under headlines like “Recommended For You” and “Things You Might Like”. Although CRNs have become quite popular with publishers, users complain about the low quality of content promoted by CRNs, while regulators in the US and Europe have faulted CRNs for failing to label sponsored links as advertisements.
In this study, we present a first look at five of the largest CRNs, including their footprint on the web, how their recommendations are labeled, and who their advertisers are. Our findings reveal that CRNs still fail to prominently disclose the paid nature of their sponsored content. This suggests that additional intervention is necessary to promote accepted best-practices in the nascent CRN marketplace, and ultimately protect online users.
-
Identifying Extension-based Ad Injection via Fine-grained Web Content Provenance
International Symposium on Research in Attacks, Intrusions and Defenses (RAID)
Extensions provide useful additional functionality for web browsers, but are also an increasingly popular vector for attacks. Due to the high degree of privilege extensions can hold, extensions have been abused to inject advertisements into web pages that divert revenue from content publishers and potentially expose users to malware. Users are often unaware of such practices, believing the modifications to the page originate from publishers. Additionally, automated identification of unwanted third-party modifications is fundamentally difficult, as users are the ultimate arbiters of whether content is undesired in the absence of outright malice.
To resolve this dilemma, we present a fine-grained approach to tracking the provenance of web content at the level of individual DOM elements. In conjunction with visual indicators, provenance information can be used to reliably determine the source of content modifications, distinguishing publisher content from content that originates from third parties such as extensions. We describe a prototype implementation of the approach called OriginTracer for Chromium, and evaluate its effectiveness, usability, and performance overhead through a user study and automated experiments. The results demonstrate a statistically significant improvement in the ability of users to identify unwanted third-party content such as injected ads with modest performance overhead.
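As a toy model of element-level provenance (the paper instruments Chromium itself), the sketch below labels each DOM-like node with the actor that created it and walks the tree to find content that did not originate from the publisher; all names here are hypothetical.

```python
# Toy model of fine-grained content provenance: every element carries a label
# identifying who created it, so third-party insertions can be highlighted.
# This is an illustrative sketch, not the OriginTracer implementation.
from dataclasses import dataclass, field

@dataclass
class Element:
    tag: str
    provenance: str                      # origin that created this element
    children: list = field(default_factory=list)

class Page:
    PUBLISHER = "publisher"

    def __init__(self):
        self.root = Element("body", self.PUBLISHER)

    def insert(self, parent: Element, tag: str, actor: str) -> Element:
        # Provenance is recorded at creation time and stays with the element.
        el = Element(tag, actor)
        parent.children.append(el)
        return el

    def third_party_content(self):
        # Walk the tree and report elements not created by the publisher,
        # which a browser UI could then highlight for the user.
        stack, flagged = [self.root], []
        while stack:
            el = stack.pop()
            if el.provenance != self.PUBLISHER:
                flagged.append(el)
            stack.extend(el.children)
        return flagged

page = Page()
article = page.insert(page.root, "article", Page.PUBLISHER)
page.insert(article, "p", Page.PUBLISHER)
page.insert(article, "iframe", "extension://adinjector")   # hypothetical injected ad
for el in page.third_party_content():
    print(f"highlight <{el.tag}> injected by {el.provenance}")
```
-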
Tracing Information Flows Between Ad Exchanges Using Retargeted Ads
USENIX Security Symposium
Numerous surveys have shown that Web users are concerned about the loss of privacy associated with online tracking. Alarmingly, these surveys also reveal that people are also unaware of the amount of data sharing that occurs between ad exchanges, and thus underestimate the privacy risks associated with online tracking.
In reality, the modern ad ecosystem is fueled by a flow of user data between trackers and ad exchanges. Although recent work has shown that ad exchanges routinely perform cookie matching with other exchanges, these studies are based on brittle heuristics that cannot detect all forms of information sharing, especially under adversarial conditions.
In this study, we develop a methodology that is able to detect client- and server-side flows of information between arbitrary ad exchanges. Our key insight is to leverage retargeted ads as a tool for identifying information flows. Intuitively, our methodology works because it relies on the semantics of how exchanges serve ads, rather than focusing on specific cookie matching mechanisms. Using crawled data on 35,448 ad impressions, we show that our methodology can successfully categorize four different kinds of information sharing behavior between ad exchanges, including cases where existing heuristic methods fail.
We conclude with a discussion of how our findings and methodologies can be leveraged to give users more control over what kind of ads they see and how their information is shared between ad exchanges.
-
UNVEIL: A Large-Scale, Automated Approach to Detecting Ransomware
USENIX Security Symposium
Although the concept of ransomware is not new (i.e., such attacks date back at least as far as the 1980s), this type of malware has recently experienced a resurgence in popularity. In fact, in 2014 and 2015, a number of high-profile ransomware attacks were reported, such as the large-scale attack against Sony that prompted the company to delay the release of the film "The Interview". Ransomware typically operates by locking the desktop of the victim to render the system inaccessible to the user, or by encrypting, overwriting, or deleting the user's files. However, while many generic malware detection systems have been proposed, none of these systems have attempted to specifically address the ransomware detection problem.
In this paper, we present a novel dynamic analysis system called UNVEIL that is specifically designed to detect ransomware. The key insight of the analysis is that in order to mount a successful attack, ransomware must tamper with a user's files or desktop. UNVEIL automatically generates an artificial user environment, and detects when ransomware interacts with user data. In parallel, the approach tracks changes to the system's desktop that indicate ransomware-like behavior. Our evaluation shows that UNVEIL significantly improves the state of the art, and is able to identify previously unknown evasive ransomware that was not detected by the anti-malware industry.
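A highly simplified sketch of the decoy-environment idea follows: seed a directory with bait files and flag the case where most of them end up rewritten with high-entropy, encrypted-looking content. UNVEIL does this inside a dynamic analysis sandbox and also monitors desktop changes; the snippet only illustrates the file-tampering signal, with made-up file contents and thresholds.

```python
# Simplified sketch of the "artificial user environment" idea: seed a decoy
# directory with files, then flag runs where most of them are rewritten with
# high-entropy (encrypted-looking) content.
import math
import os
import tempfile

def entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = [data.count(b) for b in set(data)]
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts)

def seed_decoys(directory: str, count: int = 10) -> None:
    for i in range(count):
        with open(os.path.join(directory, f"document_{i}.txt"), "w") as f:
            f.write(f"Quarterly report draft, section {i}.\n" * 20)

def looks_like_ransomware(directory: str, threshold: float = 7.0) -> bool:
    # After letting a sample run against the decoys, count files whose
    # contents now look encrypted (entropy near 8 bits per byte).
    names = os.listdir(directory)
    suspicious = 0
    for name in names:
        with open(os.path.join(directory, name), "rb") as f:
            if entropy(f.read()) > threshold:
                suspicious += 1
    return suspicious >= len(names) // 2

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as decoy_dir:
        seed_decoys(decoy_dir)
        # ... run the sample under analysis against decoy_dir here ...
        print("tampering detected" if looks_like_ransomware(decoy_dir) else "no tampering")
```
-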
Include Me Out: In-Browser Detection of Malicious Third-Party Content Inclusions
International Conference on Financial Cryptography and Data Security (FC)
Modern websites include various types of third-party content such as JavaScript, images, stylesheets, and Flash objects in order to create interactive user interfaces. In addition to explicit inclusion of third-party content by website publishers, ISPs and browser extensions are hijacking web browsing sessions with increasing frequency to inject third-party content (e.g., ads). However, third-party content can also introduce security risks to users of these websites, unbeknownst to both website operators and users. Because of the often highly dynamic nature of these inclusions as well as the use of advanced cloaking techniques in contemporary malware, it is exceedingly difficult to preemptively recognize and block inclusions of malicious third-party content before it has the chance to attack the user’s system.
In this paper, we propose a novel approach to achieving the goal of preemptive blocking of malicious third-party content inclusion through an analysis of inclusion sequences on the Web. We implemented our approach, called Excision, as a set of modifications to the Chromium browser that protects users from malicious inclusions while web pages load. Our analysis suggests that by adopting our in-browser approach, users can avoid a significant portion of malicious third-party content on the Web. Our evaluation shows that Excision effectively identifies malicious content while introducing a low false positive rate. Our experiments also demonstrate that our approach does not negatively impact a user's browsing experience when browsing popular websites drawn from the Alexa Top 500.
-
Alert Correlation Algorithms: A Survey and Taxonomy
Symposium on Cyberspace Safety and Security (CSS), Lecture Notes in Computer Science, Springer International Publishing, vol 8300, pp 183-197
Alert correlation is a system that receives alerts from heterogeneous Intrusion Detection Systems and reduces false alerts, detects high-level patterns of attacks, increases the meaning of detected incidents, predicts the future states of attacks, and detects the root causes of attacks. To reach these goals, many algorithms have been introduced, each with its own advantages and disadvantages.
In this paper, we present a comprehensive survey of previously proposed alert correlation algorithms. The survey focuses mainly on algorithms in correlation engines that can work in enterprise and practical networks. With this aim in mind, we introduce a number of features related to accuracy, functionality, and computational cost, and assess all algorithm categories against these features. The results of this survey show that each category of algorithms has its own strengths, and that an ideal correlation framework should combine the strengths of each category.
-
A Comprehensive Approach to Abusing Locality in Shared Web Hosting Servers
IEEE Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)
With the growth of network technology and the human need for social interaction, websites have become critically important, which has led to an increasing number of websites and servers. One popular solution for managing these large numbers of websites is using shared web hosting servers in order to decrease the overall cost of server maintenance. Despite its affordability, this solution is insecure and risky, as evidenced by the high number of defacements and attacks reported in recent years.
In this paper, we introduce the ten most common attacks in shared web hosting servers, which can occur because of the nature of these servers and their frequent misconfiguration. Moreover, we present several simple scenarios that are capable of penetrating these kinds of servers even in the presence of several security mechanisms. Finally, we provide a comprehensive secure configuration for confronting these attacks.
-
Two Novel Server-Side Attacks against Log File in Shared Web Hosting Servers
IEEE Conference for Internet Technology and Secured Transactions (ICITST)
Shared web hosting enables hosting a multitude of websites on a single powerful server. It is a popular solution because many customers share the overall cost of server maintenance, and website owners do not need to deal with administration issues.
In this paper, we illustrate how shared web hosting services work and demonstrate the security weaknesses that arise due to the lack of proper isolation between different websites hosted on the same server. We present two new server-side attacks against the log file whose objectives are to reveal private information about other hosted websites and to stage further complex attacks. In the absence of isolated log files among websites, an attacker controlling a website can inspect and manipulate the contents of the log file. These attacks enable an attacker to disclose the file and directory structure of other websites and launch other sorts of attacks. Finally, we propose several countermeasures, centered on separating log files for each website, to secure shared web hosting servers against the two attacks.
-
Performance Evaluation of Shared Hosting Security Methods
IEEE Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)
Shared hosting is a kind of web hosting in which multiple websites reside on one webserver. It is cost-effective and makes administration easier for website owners. However, shared hosting has some performance and security issues. In the default shared hosting configuration, all websites' scripts are executed under the webserver's user account regardless of their owner. Therefore, a website is able to access other websites' resources. This security problem arises from the lack of proper isolation between different websites hosted on the same webserver.
In this survey, we examine different methods for addressing this security issue and evaluate the performance of these methods under various configurations.
-
An Anomaly-based Botnet Detection Approach for Identifying Stealthy Botnets
IEEE Conference on Computer Applications and Industrial Electronics (ICCAIE)
Botnets (networks of compromised computers) are often used for malicious activities such as spam, click fraud, identity theft, phishing, and distributed denial of service (DDoS) attacks. Most previous research has introduced fully or partially signature-based botnet detection approaches.
In this paper, we propose a fully anomaly-based approach that requires no a priori knowledge of bot signatures, botnet C&C protocols, or C&C server addresses. We start from the inherent characteristics of botnets. Bots connect to the C&C channel and execute the received commands. Bots belonging to the same botnet receive the same commands, which causes them to have similar netflow characteristics and to perform the same attacks. Our method clusters bots with similar netflows and attacks in different time windows and performs correlation to identify bot-infected hosts. We have developed a prototype system and evaluated it with real-world traces including normal traffic and several real-world botnet traces. The results show that our approach has high detection accuracy and a low false positive rate.
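The clustering-and-correlation intuition can be sketched in a few lines: hosts whose flow features look alike within a time window, and keep looking alike across windows, are grouped as suspected bots. The flow records, features, and similarity threshold below are illustrative assumptions, not the paper's pipeline.

```python
# Toy sketch of the anomaly-based intuition: hosts with similar per-window
# flow features that stay similar across windows are candidate bots.
from collections import defaultdict
from itertools import combinations

# (time_window, src_host, bytes_sent, packet_count, distinct_destinations)
flows = [
    (0, "10.0.0.5", 1200, 30, 3), (0, "10.0.0.7", 1180, 29, 3), (0, "10.0.0.9", 90_000, 800, 40),
    (1, "10.0.0.5", 1250, 31, 3), (1, "10.0.0.7", 1210, 30, 3), (1, "10.0.0.9", 200, 4, 1),
]

def similar(a, b, tolerance=0.15) -> bool:
    # Two hosts are "similar" if every feature differs by at most 15%.
    return all(abs(x - y) <= tolerance * max(x, y, 1) for x, y in zip(a, b))

# Group per-window features by host, then record which host pairs look similar.
pair_hits = defaultdict(int)
windows = defaultdict(dict)
for window, host, nbytes, pkts, dests in flows:
    windows[window][host] = (nbytes, pkts, dests)

for window, hosts in windows.items():
    for (h1, f1), (h2, f2) in combinations(sorted(hosts.items()), 2):
        if similar(f1, f2):
            pair_hits[(h1, h2)] += 1

# Correlate across windows: pairs that are similar in every window are suspicious.
suspects = {pair for pair, hits in pair_hits.items() if hits == len(windows)}
print("suspected bot pairs:", suspects or "none")
```
-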
A Disk Scheduling Algorithm Based on ANT Colony Optimization
ISCA Conference on Parallel and Distributed Computing and Communication Systems (PDCCS)
Audio, animations, and video belong to a class of data known as delay-sensitive because they are sensitive to delays in presentation to the users. Also, because of the huge amount of data in such items, the disk is an important device in managing them. In order to have an acceptable presentation, disk request deadlines must be met, and a real-time scheduling approach should be used to guarantee the timing requirements in such environments. Several disk scheduling algorithms have been proposed to optimize the scheduling of real-time disk requests, but improving the results remains a challenge.
In this paper, we propose a new disk scheduling method based on the Ant Colony Optimization (ACO) approach. In this approach, ACO models the tasks and finds the best sequence to minimize the number of missed tasks and maximize throughput. Experimental results showed that the proposed method worked very well and outperformed related methods in terms of miss ratio and throughput in most cases.
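For a concrete sense of how ACO can order real-time disk requests, here is a compact sketch: ants build service orders guided by pheromone, and orders that miss fewer deadlines deposit more pheromone. The seek-time model, parameters, and request set are illustrative assumptions rather than the paper's exact formulation.

```python
# Compact sketch of an ACO-style scheduler for real-time disk requests.
import random

# (track, deadline) for each pending request; the head starts at track 0.
requests = [(50, 40), (10, 25), (90, 120), (30, 60), (70, 90)]
SEEK_PER_TRACK = 0.5   # assumed time units to move the head one track

def missed_deadlines(order) -> int:
    time, head, missed = 0.0, 0, 0
    for idx in order:
        track, deadline = requests[idx]
        time += abs(track - head) * SEEK_PER_TRACK
        head = track
        if time > deadline:
            missed += 1
    return missed

def aco_schedule(iterations=100, ants=10, evaporation=0.1, q=1.0):
    n = len(requests)
    pheromone = [[1.0] * n for _ in range(n)]   # pheromone[i][j]: follow request i with j
    best, best_missed = list(range(n)), n + 1
    for _ in range(iterations):
        tours = []
        for _ in range(ants):
            unvisited, tour = set(range(n)), [random.randrange(n)]
            unvisited.discard(tour[0])
            while unvisited:
                last = tour[-1]
                choices = list(unvisited)
                weights = [pheromone[last][j] for j in choices]
                tour.append(random.choices(choices, weights=weights)[0])
                unvisited.discard(tour[-1])
            tours.append((missed_deadlines(tour), tour))
        # Evaporate, then let each ant deposit pheromone inversely to misses.
        for row in pheromone:
            for j in range(n):
                row[j] *= (1.0 - evaporation)
        for missed, tour in tours:
            deposit = q / (1 + missed)
            for a, b in zip(tour, tour[1:]):
                pheromone[a][b] += deposit
            if missed < best_missed:
                best, best_missed = tour, missed
    return best, best_missed

if __name__ == "__main__":
    order, missed = aco_schedule()
    print("service order (tracks):", [requests[i][0] for i in order], "missed deadlines:", missed)
```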
Honors & Awards
-
Student Travel Grant
Annual Computer Security Applications Conference (ACSAC)
-
Student Travel Grant
ACM Internet Measurement Conference (IMC)
-
Student Travel Grant
Network and Distributed System Security Symposium (NDSS)
-
Student Travel Grant
ACM Internet Measurement Conference (IMC)
-
Student Travel Grant
USENIX Security Symposium