Search Results

Item hits:
  • Thesis
  • Authors: Guirguis, Mina
  • Year: 2006
  • Abstract: One important consideration in realizing dependable computing systems and networks is to uncover vulnerabilities in their designs to adversarial attacks. Currently, the designs of these systems employ different forms of adaptation mechanisms in order to optimize their performance by ensuring that desirable properties, such as stability, efficiency and fairness, are not compromised. This thesis discovers and studies a new type of adversarial attack that targets such adaptation mechanisms by exploiting their dynamics of operation, i.e., the characteristics of their transient behavior. We coin the term Reduction of Quality (RoQ) attacks for this new breed of adversarial attacks. The premise of RoQ attacks is to keep an adaptive mechanism constantly in a transient state, effectively depriving the system of much of its capacity and significantly reducing its service quality. In this thesis we develop a general control-theoretic framework that provides a unified approach to modeling and vulnerability assessment of the dynamics underlying RoQ exploits. Within this framework, we introduce and formalize the notion of an attack "Potency" that capitalizes on the attacker's best incentive: maximizing the marginal utility of its attack traffic. Unlike traditional brute-force Denial of Service attacks that aim to take down a system at any cost, RoQ attacks aim to maximize the damage inflicted on a system by consuming an innocuous, small fraction of that system's hijacked capacity. We instantiate our framework using detailed analytical models and associated metrics on a series of adaptation mechanisms that are commonly used in networking protocols, end-system admission controllers and load balancers. We assess the impact of RoQ attacks using analysis, simulations, and Internet experiments. We identify key factors that expose the tradeoffs between resilience and susceptibility to RoQ attacks. These factors could be used to harden adaptation mechanisms against RoQ exploits, in addition to developing new forms of countermeasures and defense mechanisms.
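
The potency idea above can be made concrete with a small simulation. The following is a minimal Python sketch, not the thesis's model: it assumes a simplified AIMD-style (additive-increase, multiplicative-decrease) adaptation loop, and every parameter (CAPACITY, ALPHA, BETA, the burst schedule) is invented for illustration. Short periodic bursts hold the controller in its transient back-off regime, and potency is computed as damage per unit of attack traffic, echoing the abstract's definition.

    # Toy RoQ-style exploit against an AIMD-like adaptation loop.
    # All parameters are illustrative assumptions, not from the thesis.

    CAPACITY = 100.0      # service capacity (units/step)
    ALPHA = 1.0           # additive increase per step
    BETA = 0.5            # multiplicative decrease on overload
    STEPS = 1000

    def simulate(burst_period=None, burst_size=120.0):
        """Return (average goodput, average attack traffic per step)."""
        rate, total, attack_cost = 1.0, 0.0, 0.0
        for t in range(STEPS):
            load = rate
            if burst_period and t % burst_period == 0:
                load += burst_size          # short adversarial burst
                attack_cost += burst_size
            if load > CAPACITY:
                rate *= BETA                # overload: multiplicative back-off
            else:
                total += rate               # useful work done this step
                rate += ALPHA               # additive probe for more capacity
        return total / STEPS, attack_cost / STEPS

    baseline, _ = simulate()
    under_attack, cost = simulate(burst_period=50)
    damage = baseline - under_attack
    print(f"goodput: {baseline:.1f} -> {under_attack:.1f}")
    print(f"potency (damage per unit attack traffic): {damage / cost:.2f}")

Because the bursts are brief, the attacker's average traffic stays small while the controller spends much of its time recovering, which is exactly the marginal-utility asymmetry the abstract describes.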

  • Thesis
  • Authors: Pascu, Stefan
  • Year: 2006
  • Abstract: In this thesis we propose a novel way to achieve global network information dissemination in which some wavelengths are reserved exclusively for global control information exchange. We study the routing and wavelength assignment (RWA) problem for the special communication pattern of non-blocking all-to-all broadcast in WDM optical networks. We provide efficient solutions to reduce the number of wavelengths needed for non-blocking all-to-all broadcast, in the absence of wavelength converters, for network information dissemination. We adopt an approach in which we consider all nodes to be tap-and-continue capable, thus studying light-trees rather than lightpaths. To the best of our knowledge, this thesis is the first to consider "tap-and-continue" capable nodes in the context of conflict-free all-to-all broadcast. The problem of all-to-all broadcast using individual lightpaths has been proven to be NP-complete [6]. We provide optimal RWA solutions for conflict-free all-to-all broadcast for some particular cases of regular topologies, namely the ring, the torus and the hypercube. We make an important contribution to hypercube decomposition into edge-disjoint structures. We also present near-optimal polynomial-time solutions for the general case of arbitrary topologies. Furthermore, we apply for the first time the "cactus" representation of all minimum edge-cuts of graphs with arbitrary topologies to the problem of all-to-all broadcast in optical networks. Using this representation recursively, we obtain near-optimal results for the number of wavelengths needed by the non-blocking all-to-all broadcast. The second part of this thesis focuses on the more practical case of multi-hop RWA for non-blocking all-to-all broadcast in the presence of Optical-Electrical-Optical conversion. We propose two simple but efficient multi-hop RWA models. In addition to reducing the number of wavelengths, we also concentrate on reducing the number of optical receivers, another important optical resource. We analyze these models on the ring and the hypercube, as special cases of regular topologies. Lastly, we develop a good upper bound on the number of wavelengths in the case of non-blocking multi-hop all-to-all broadcast on networks with arbitrary topologies and offer a heuristic algorithm to achieve it. We propose a novel network partitioning method based on "virtual perfect matching" for use in the RWA heuristic algorithm.
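
To illustrate the kind of problem the abstract describes, here is a toy Python sketch, assuming unidirectional clockwise routing on a small ring and greedy first-fit wavelength coloring. It shows only the conflict constraint that drives RWA, none of the thesis's optimal light-tree or cactus-based constructions; the ring size and routing rule are illustrative assumptions.

    # Toy RWA baseline: all-to-all lightpath demands on an N-node
    # unidirectional ring, greedily colored so that no two lightpaths
    # sharing a fiber link reuse a wavelength.

    N = 6  # ring size (illustrative)

    def links_of_path(src, dst):
        """Clockwise route: the directed ring links (i, i+1) used."""
        links, node = [], src
        while node != dst:
            nxt = (node + 1) % N
            links.append((node, nxt))
            node = nxt
        return links

    paths = [links_of_path(s, d) for s in range(N) for d in range(N) if s != d]

    assignment = []
    for path in paths:
        used = set()
        for other, wl in assignment:
            if set(path) & set(other):      # share a fiber link -> conflict
                used.add(wl)
        wl = 0
        while wl in used:                   # smallest conflict-free wavelength
            wl += 1
        assignment.append((path, wl))

    print("wavelengths needed (greedy):", 1 + max(wl for _, wl in assignment))

Tap-and-continue light-trees, as studied in the thesis, let one wavelength serve many destinations per source, which is why they need far fewer wavelengths than this lightpath-per-pair baseline.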

  • Thesis
  • Authors: Lakhina, Anukool
  • Year: 2006
  • Abstract: To attack this problem we adopt the general strategy of seeking low-dimensional approximations that preserve important traffic properties. Our starting point, and the first contribution of this dissertation, is to demonstrate that accurate low-dimensional approximations of network traffic often exist. We show that network-wide traffic measurements that exhibit as many as hundreds of dimensions can be approximated well using a much smaller set of dimensions (for example, fewer than ten). This observation of low effective dimensionality is key, and provides leverage on a number of problems related to network operations. In particular, low effective dimensionality leads us to make use of subspace methods. These methods systematically exploit the low dimensionality of multi-feature traffic flows to capture network-wide normal behavior and to expose anomalous events that span a network. We consider two basic kinds of anomalies: volume anomalies and general anomalies. Volume anomalies are unusual and significant changes in a network's traffic levels that can often involve multiple links, while general anomalies include a range of unusual events that do not necessarily disturb traffic volume, such as port scans, network scans, user experiments and high-rate flows. Our second contribution is to show that in the case of volume anomalies, by applying subspace methods to simple traffic measurements from all links, one can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) traffic flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. In the case of general anomalies, we show that the distributions of packet features (IP addresses and ports) observed in network-wide flow traces reveal both the presence and the nature of a wide range of anomalies. Our third contribution is to show that by using entropy as a summarization tool, network-wide analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that using feature distributions, anomalies naturally fall into distinct and meaningful clusters, which can be used to automatically classify anomalies and to uncover new anomaly types. Finally, our fourth contribution concerns methods for estimating traffic matrices from readily available link traffic. We show that in settings where partial flow measurements are possible, the low dimensionality of traffic flows can be exploited to transform the ill-posed matrix estimation problem into a problem that is amenable to direct solution methods. We conclude that multivariate traffic analysis methods show considerable promise as tools for a wide range of current challenges in network analysis.
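
The subspace method lends itself to a compact numerical illustration. The following numpy sketch is a toy, not the dissertation's calibrated procedure (which uses a principled Q-statistic threshold rather than the crude one here): it builds a synthetic links-by-time matrix with low effective dimensionality, takes the top principal components as the normal subspace, and flags timepoints whose residual energy (squared prediction error) is unusually large. All sizes and the injected anomaly are invented.

    # Toy subspace anomaly detection on a links-by-time traffic matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    T, L, K = 500, 40, 4                    # timepoints, links, normal-subspace dim

    # Synthetic traffic: a few latent patterns mixed onto many links.
    latent = rng.normal(size=(T, K))
    X = latent @ rng.normal(size=(K, L)) + 0.1 * rng.normal(size=(T, L))
    X[300, :5] += 8.0                       # inject a volume anomaly at t=300

    Xc = X - X.mean(axis=0)                 # center each link's time series
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:K].T                            # top-K principal directions

    residual = Xc - (Xc @ P) @ P.T          # traffic the normal subspace can't explain
    spe = (residual ** 2).sum(axis=1)       # squared prediction error per timepoint

    threshold = spe.mean() + 5 * spe.std()  # crude threshold (illustrative only)
    print("flagged timepoints:", np.where(spe > threshold)[0])

Normal variation lives almost entirely in the top-K subspace, so the injected spike at t=300 dominates the residual and is the only timepoint flagged.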

  • Thesis
  • Authors: Jang, Myeong-Wuk
  • Year: 2006
  • Abstract: The growth of the computational power of computers and the speed of networks has made large-scale multi-agent systems a promising technology. As the number of agents in a single application approaches thousands or millions, distributed computing has become a general paradigm in large-scale multi-agent systems to reap the benefits of parallel computing. However, since these numerous agents are located on distributed computers and interact intensively with each other to achieve common goals, the agent communication cost significantly affects the performance of applications. Therefore, optimizing the agent communication cost on distributed systems could considerably reduce the runtime of multi-agent applications. Furthermore, because static multi-agent frameworks may not be suitable for all kinds of applications, and the communication patterns of agents may change during execution, multi-agent frameworks should adapt their services to support applications differently according to their dynamic characteristics. This thesis proposes three adaptive services at the agent framework level to reduce the agent communication and coordination cost of large-scale multi-agent applications. First, communication locality-aware agent distribution aims at minimizing inter-node communication by collocating heavily communicating agents on the same platform and maintaining agent group-based load sharing. Second, application agent-oriented middle agent services attempt to optimize agent interaction through middle agents by executing application agent-supported search algorithms on the middle agent address space. Third, message passing for mobile agents aims at reducing the time of message delivery to mobile agents using location caches or by extending the agent address scheme with location information. With these services, we have achieved very impressive experimental results in large-scale UAV simulations including up to 10,000 agents. We have also provided a formal definition of our framework and services with operational semantics.
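
As a rough illustration of communication locality-aware agent distribution, here is a toy greedy placement in Python: heavily communicating agent pairs are collocated when node capacity allows, and agents otherwise fall back to the least-loaded node. The function name, traffic figures, and capacities are all invented; the thesis's service additionally migrates agents at runtime and performs group-based load sharing.

    # Toy greedy placement: collocate heavy communicators, balance the rest.
    from collections import defaultdict

    def place(agents, traffic, nodes, capacity):
        """traffic: dict mapping (a, b) agent pairs to message volume."""
        location = {}
        load = defaultdict(int)
        # Heaviest-communicating pairs first.
        for (a, b), _vol in sorted(traffic.items(), key=lambda kv: -kv[1]):
            for agent in (a, b):
                if agent in location:
                    continue
                # Prefer the partner's node if it still has room.
                partner = b if agent == a else a
                target = location.get(partner)
                if target is None or load[target] >= capacity:
                    target = min(nodes, key=lambda n: load[n])  # least loaded
                location[agent] = target
                load[target] += 1
        for agent in agents:                 # agents with no recorded traffic
            if agent not in location:
                target = min(nodes, key=lambda n: load[n])
                location[agent] = target
                load[target] += 1
        return location

    traffic = {("a", "b"): 90, ("b", "c"): 40, ("c", "d"): 10}
    print(place(["a", "b", "c", "d"], traffic, ["n1", "n2"], capacity=2))

With these inputs, a and b share one node and c and d the other, so only the lightest link (b, c) crosses node boundaries.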

  • Thesis
  • Authors: Morgan, Grayson B.
  • Year: 2006
  • Abstract: As the Department of Defense (DoD) moves to create decision superiority through the use of network centric operations, decision-makers become increasingly exposed to data quality vulnerabilities that mimic integrity failures within the Information Assurance (IA) security program. This outcome is a result of the current IA enforcement mechanism, known as the Defense Information Assurance Certification and Accreditation Process (DIACAP), not being designed to mitigate risks involving data quality (accuracy, relevance, timeliness, usability, completeness, brevity, and security) beyond those associated with security. Misconceptions and the uncertainty created by the inability of decision-makers to distinguish between poor quality data and breaches of IA integrity lead to reduced trust of critical DoD information systems. This research presents a modified Action Research process to expand IA risk evaluation criteria beyond the current security paradigm with the purpose of identifying undocumented data-quality-based risks.

  • Thesis
  • Authors: Kothapalli, Kishore
  • Year: 2006
  • Abstract: In this age of information, new models of information exchange based on overlay networks are gaining popular attention. Overlay networks provide a logical interconnection topology over an existing physical network. They offer benefits such as ease of implementation, flexibility, adaptability, and incremental deployability. Due to the wide range of applications and advantages, formal study of overlay networks is required to understand the various research challenges in this context. In this thesis, we study two classes of overlay networks, namely peer-to-peer networks and wireless ad hoc networks. Our focus is on two central issues in overlay networks: how to arrive at efficient topologies and how to provide efficient routing strategies. Peer-to-peer networks have gained a lot of research attention in recent years for various reasons. Despite many advances, however, fundamental questions, such as how to design deterministic constructions and how to organize peers of non-uniform bandwidth, have remained open. In this thesis, we answer these questions by providing a deterministic overlay topology, Pagoda, that can be used for efficient routing, data management and multicasting. Given the difficulty of arriving at good deterministic topologies in a purely decentralized manner, we also propose a unified methodology to create a large class of overlay topologies via an approach called supervised overlay networks. We show that this approach also has other advantages, such as support for rapid peer join/leave and rapid repair. For the case of wireless ad hoc networks, we start by providing a model for wireless communication that is much more realistic than the models that are being used in the theoretical community. Using this model, we show how to arrive at a sparse spanner construction based on dominating sets. We then use the spanner construction to provide efficient algorithms for broadcasting and information gathering in wireless ad hoc networks. All our algorithms are simple, self-stabilizing and require only a constant amount of storage at any node. Thus, our algorithms are also applicable in a wide variety of scenarios, such as simple sensor devices.
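
The dominating-set idea behind the spanner construction can be sketched in a few lines. The following is a centralized greedy toy in Python, not the thesis's distributed, self-stabilizing construction under a realistic wireless model: it repeatedly picks the node covering the most still-undominated nodes, so every node ends up either chosen or adjacent to a chosen node. The example graph is invented.

    # Greedy dominating set: a centralized toy version of the backbone
    # selection step behind dominating-set-based spanners.

    def greedy_dominating_set(adj):
        """adj: dict node -> set of neighbors. Returns a dominating set."""
        uncovered = set(adj)
        dominators = set()
        while uncovered:
            # Pick the node covering the most still-uncovered nodes.
            best = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
            dominators.add(best)
            uncovered -= adj[best] | {best}
        return dominators

    # Small unit-disk-like test graph (illustrative).
    adj = {
        0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4},
    }
    print(greedy_dominating_set(adj))  # {1, 4} for this graph

Routing along the chosen backbone nodes rather than all nodes is what keeps the resulting spanner sparse while preserving connectivity.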

  • Thesis
  • Authors: Dang, Jiangbo
  • Year: 2006
  • Abstract: Due to the convergence of industrial demands for business-process and supply-chain management and recent results in multiagent systems, autonomous software services, and the Semantic Web, Web services are becoming the main focus for the next generation of the Internet. They will behave like intelligent agents by composing themselves cooperatively into workflows. A workflow is a set of services that execute by carrying out specified control and data flows. Agents are persistent active entities that can perceive, reason, and act in their environment, and communicate with other agents. Agents can interact autonomously across enterprise boundaries and, when thought of as services, provide a new way to achieve programming-in-the-large. Agents interact with other agents through negotiation since they are autonomous entities. Negotiation is a technique for reaching mutually beneficial agreement through communication among agents. Agents can autonomously negotiate and coordinate with others to execute a workflow in a heterogeneous system by agreeing on functional and quality metrics of the services they request and provide. The focus of this dissertation is on negotiating and coordinating a composed service represented as a workflow among self-interested service agents in a competitive service-oriented environment. By balancing between optimized utility and computation cost, I introduce an optimal strategy for multiple-issue negotiation between interacting agents with bounded rationality. As the number of services available on the Web proliferates, it is very likely that multiple service requestors and providers will be negotiating simultaneously and competitively. I present a protocol to support many-to-many, bilateral, multiple-issue negotiation. This protocol is neutral to both service requestors and service providers. Therefore, it can eliminate the decommitment situations arising in one-sided commitment. Commitments among agents can be used to model a workflow and coordinate their execution of it. I provide methodologies to map an OWL-S model for a workflow to a Colored Petri Net (CPN), a graphical and mathematical modeling tool for information processing systems, and then infer commitments and causal relationships from the CPN graph. This methodology enables agents to enact a workflow collaboratively through commitment-based formalisms.
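
As a loose illustration of multiple-issue negotiation, here is a toy Python exchange, emphatically not the dissertation's optimal bounded-rationality strategy or its many-to-many protocol: one agent scores offers as a weighted sum over issues, the other concedes on a fixed schedule, and a deal closes once an offer meets a time-discounted reservation utility. All weights, discounts, and issue names are invented.

    # Toy multiple-issue negotiation with a fixed concession schedule.
    ISSUES = ("price", "deadline", "quality")

    def utility(offer, weights):
        """Offer values and weights are normalized to [0, 1]."""
        return sum(weights[i] * offer[i] for i in ISSUES)

    def concede(offer, step=0.05):
        """Uniform concession: move every issue toward the opponent."""
        return {i: max(0.0, v - step) for i, v in offer.items()}

    seller_w = {"price": 0.5, "deadline": 0.3, "quality": 0.2}

    offer = {i: 1.0 for i in ISSUES}        # buyer opens at its ideal point
    reservation, discount = 0.4, 0.98
    for round_ in range(40):
        # Opposing preferences: the seller values an issue at 1 - buyer value.
        seller_view = utility({i: 1.0 - v for i, v in offer.items()}, seller_w)
        if seller_view >= reservation * (discount ** round_):
            print(f"deal in round {round_}: {offer}")
            break
        offer = concede(offer)               # buyer concedes and re-offers
    else:
        print("no agreement")

The discount factor models the time pressure that the dissertation's strategy trades off against computation cost when deciding how fast to concede.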

  • Thesis
  • Authors: Sai Ranga, Prashanth C.
  • Year: 2006
  • Abstract: Current heterogeneous meta-computing systems, such as computational clusters and grids, offer a low-cost alternative to supercomputers. In addition, they are highly scalable and flexible. They consist of a host of diverse computational devices which collaborate via a high-speed network and may execute high-performance applications. Many high-performance applications are an aggregate of modules. Efficient scheduling of such applications on meta-computing systems is critical to meeting deadlines. In this dissertation, we introduce three new algorithms: the Heterogeneous Critical Node First (HCNF) algorithm, the Heterogeneous Largest Task First (HLTF) algorithm and the Earliest Finish Time with Dispatch Time (EFT-DT) algorithm. HCNF is used to schedule parallel applications of forms represented by directed acyclic graphs onto networks of workstations to minimize their finish times. We compared the performance of HCNF with those of the Heterogeneous Earliest Finish Time (HEFT) and Scalable Task Duplication based Scheduling (STDS) algorithms. In terms of Schedule Length Ratio (SLR) and speedup, HCNF outperformed HEFT on average by 13% and 18%, respectively, and outperformed STDS on average by 8% and 12%, respectively. The HLTF algorithm is used to schedule a set of independent tasks onto a network of heterogeneous processors to minimize finish time. We compared the performance of HLTF with that of the Sufferage algorithm. In terms of makespan, HLTF outperformed Sufferage on average by 4.5%, with a tenth of the run time. The EFT-DT algorithm schedules a set of independent tasks onto a network of heterogeneous processors to minimize finish time when considering dispatch times of tasks. We compared the performance of EFT-DT with that of a First-In-First-Out (FIFO) schedule. In terms of minimizing makespan, EFT-DT outperformed FIFO on average by 30%.
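
The idea of folding dispatch time into an earliest-finish-time rule can be sketched directly. The following Python toy is one plausible reading of the abstract, not the dissertation's EFT-DT algorithm: tasks are dispatched serially over a shared link, each task pays its dispatch delay before it can start, and each is placed on the processor minimizing its finish time. The cost tables are invented.

    # Toy earliest-finish-time scheduling with dispatch delays, in the
    # spirit of EFT-DT. exec_time[t][p] = cost of task t on processor p.

    def eft_with_dispatch(exec_time, dispatch_time):
        n_procs = len(exec_time[0])
        ready = [0.0] * n_procs          # when each processor frees up
        link_free = 0.0                  # dispatcher sends one task at a time
        makespan = 0.0
        for t, row in enumerate(exec_time):
            arrive = link_free + dispatch_time[t]
            # Pick the processor minimizing this task's finish time.
            p = min(range(n_procs),
                    key=lambda q: max(ready[q], arrive) + row[q])
            start = max(ready[p], arrive)
            ready[p] = start + row[p]
            link_free = arrive           # link busy until the task is sent
            makespan = max(makespan, ready[p])
        return makespan

    exec_time = [[4, 7], [3, 2], [6, 5], [2, 9]]   # tasks x processors
    dispatch = [1, 1, 2, 1]
    print("makespan:", eft_with_dispatch(exec_time, dispatch))

A FIFO baseline would assign each task to the next free processor regardless of its heterogeneous cost, which is the comparison the abstract reports EFT-DT winning by 30% on average.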

  • Thesis
  • Authors: Marino, Mark Christopher
  • Year: 2006
  • Abstract: Amidst the various forms of electronic literature stands a class of interactive programs that simulates human conversation. A chatbot, or chatterbot, is a program with which users can "speak," typically by exchanging text through an instant-messaging style interface. Chatbots have been therapists, Web site hosts, language instructors, and even performers in interactive narratives. Over the past ten years, they have proliferated across the Internet, despite being based on a technology that predates the Web by thirty years. In my readings, these chatbots are synecdochic of the process by which networked identities form on the Internet within the power dynamics of hegemonic masculinity. Chatbots, in this light, model the collaborative performance humans enact on electronically mediated networks. These computer programs stand as the nexus of various roads of inquiry and present a useful model for gender construction and racial formation enacted over electronically mediated networks. Chatbots are actor-networks, bringing together programmers, artists, and machines to develop interactive entities. To match their interdisciplinarity, this dissertation brings together humanities, scientific, and sociological approaches to analyze chatbots in their broader historical and cultural context. In particular, I blend textual analysis, cultural studies, and survey research. Central to the work is a survey of the makers and users of chatbots. Once a sense of the makeup of the community has been determined, subsequent chapters apply race, gender, and labor theories to the interpretation of specific chatbots in action. These interpretations are preceded by a look at Alan Turing, whose provocations about imitating humanity and performing gender set the tone for the debates that surround chatbots. The chatbots in this dissertation are used for websites, interactive fiction, interactive drama, adult entertainment, and educational contexts. From ELIZA to A.L.I.C.E., these chatbots span the history of chatbots, ending with contemporary applications such as Tactical Iraqi, Facade, and my own Barthes' Bachelorette. In this context, this dissertation enters debates about narratology and ludology, offering directed poetics and systemic exploration in their place. The dissertation also considers other relevant cultural objects, such as the Chess-Playing Turk and cinematic cyborgs appearing in Simone, Thomas est Amoureux (Thomas in Love), and Blade Runner.

  • Thesis
  • Authors: Agarwal, Sachin Kumar
  • Year: 2006
  • Abstract: The unifying theme of this work is to provide new scalable solutions, comprising algorithms, protocols, and data structures, for solving data synchronization and set difference estimation problems. These problems are repeatedly encountered in distributed systems, and solving them efficiently directly affects the scalability of the distributed system, i.e., how many network hosts can participate in it. Our new solutions, if deployed, can significantly reduce communication, computational overhead, and metadata stored on hosts as compared to currently used approaches for data synchronization and set difference estimation. Modern distributed network applications often utilize a wholesale data transfer protocol known as "slow sync" for reconciling data on constituent hosts. This approach is markedly inefficient with respect to bandwidth usage, latency, and energy consumption. We analyze and implement a novel data synchronization scheme (CPIsync) whose salient property is that its communication complexity depends on the number of differences between the PDA and PC, and is essentially independent of the overall number of records. Moreover, our implementation shows that the computational complexity and energy consumption of CPIsync are practical, and that the overall latency is typically much smaller than that of slow sync or alternative synchronization approaches based on Bloom filters. We also consider the related problem of exchanging two similar strings held by different hosts with a minimum amount of communication. Our approach involves transforming a string into a multiset of substrings that are reconciled efficiently using set reconciliation algorithms, and then put back together on the remote host using recent graph-theoretic techniques. We present analyses, experiments and results to show that its communication complexity compares favorably to rsync. We also quantify the possible tradeoff between the communication complexity and the computational complexity of our approach. Finally, we propose several protocols for estimating the number of differences between sets held on remote hosts. Our approaches are based on modifications of the counting Bloom filter and are designed to minimize communication and interaction between the hosts. As such, they are appropriate for streamlining a variety of communication-sensitive network applications, including data synchronization in mobile networks, gossip protocols and content delivery networks. We provide analytical bounds on the expected performance of our approach, together with heuristic improvements, and then back our claims with experimental evidence that our protocols can outperform existing difference estimation techniques.
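
The counting-Bloom-filter approach to difference estimation can be illustrated compactly. This Python sketch is a toy baseline, not the thesis's protocols or bounds: each host summarizes its set as a counter array, and subtracting the two arrays estimates the symmetric difference. Hash collisions between differing items can cancel and bias the estimate low, which is the kind of effect the thesis's analysis accounts for; filter size, hash construction, and record names are invented.

    # Toy set-difference estimation with counting Bloom filters.
    import hashlib

    M, K = 256, 3                            # counters and hash functions (toy sizes)

    def positions(item):
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4*i:4*i+4], "big") % M for i in range(K)]

    def counting_filter(items):
        counts = [0] * M
        for item in items:
            for pos in positions(item):
                counts[pos] += 1
        return counts

    A = {f"rec{i}" for i in range(100)}
    B = (A - {"rec1", "rec2"}) | {"rec200"}  # 3 true differences

    fa, fb = counting_filter(A), counting_filter(B)
    raw = sum(abs(a - b) for a, b in zip(fa, fb))
    print("estimated differences:", raw / K)  # each difference touches K counters

Only the filters, not the records, cross the network, so the hosts learn roughly how far apart their sets are at a cost independent of set size; that estimate can then parameterize a synchronization scheme such as CPIsync.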