Abstracts of the talks
Undergraduate Students
- Keila Alfred: A Quantitative Examination of the Role of CD4 T-Cells and CD8 T-Cells on Carcinogenesis
- Erika Alpeter and Bethany Vohlers: Highly Complex Models of Disease Risk
- Joseph Arthur: Relating Steady States of Discrete and Continuous Models in Systems Biology
- Kathryn Ashley and Victoria Sincavage: Ecological Systems, Nonlinear Boundary Conditions, and Σ-shaped Bifurcation Curves
- Marcus Bartlett: On some relations between chemical indices on trees
- Crystal Bennett: CO-Mediated HbS Polymer Melting
- Ted Berkowitz: A Comparison of Methods to Compute Confidence Intervals for Proportions
- G. Crenshaw, S. Gray, J. Park: 3-D Vesicle Membrane Simulation
- Jacqulyn Currie: Tensegrity
- Edwitch Dely, Dai Hyun Kwon: The Effect of Vector-Host Coupling on Vector-Borne Disease Dynamics: Epidemiological Consequences of Vector's Oviposition Dependency on Host-Related Habitats
- Z. Dixon, J. Fisher, M. Scruggs: Optimization of Parameter Estimates for Nonlinear Viscoelastic Models
- Christopher Donaldson: Risk Prediction for Health Screenings - A Data Mining Approach
- Amanda English and Chong Wang: Extension of Grammatical Evolution Decision Trees for Family Trio Data
- Jennifer Figueroa: Non-Medical Use of Stimulant Medication Among College Students: An Optional Randomized Response Technique
- Gerald Gamble: The 411 on the BCS
- Candace Ghent: The relative impact of Protease and Reverse Transcriptase Inhibition
- Jeffrey Gory and Holly Sweeney: Comparison of Internal Model Validation Methods for Multifactor Dimensionality Reduction to Find Complex Models
- Zachary Huntington-Meath, Matthew Jester: Detecting and Modeling the Genes Influenced by Natural Selection in Drosophila ananassae
- John Kelly: The WAY We Learn - an Educational Data Mining (EDM) Approach
- Ethan Lockhart: Efficient Total Variation Minimization for Speckle Image Denoising
- Yao Messan: The Sensitivity Analysis for the Sickle Cell Polymer Melting Model
- Will Milliken: Numerical Confirmation of a Stochastic Model of Taylor Dispersion for all Times with New Experiments
- S. Moffitt, S. Yoon, I. Zeller: Wavelet De-convolution Techniques for Estimating Probability Density Functions
- Hannah Moore and John Patterson: The Effects of Floral Reflectance Plasticity and Induction Temperature in Plantago lanceolata
- Garland J. Mosteller: Investigating the Impact of the 2011 BBCOR Specifications in the Big South Conference
- John Nardini: A discrete model of iron metabolism in lung epithelial cells with fungal challenge
- Matthew Neal and Andrew Niswander: A Comparison of the Neighbor Joining and Balanced Minimal Evolution Methods
- Nils Nelson: Relative Efficiency of Maximum Partial Likelihood Estimators Under Sampling Schemes
- Olga Stulov: 3D Computational Models of Flagella With and Without Hispid Hairs
- Alice Toms: Applying Algebraic Concepts to Translate Agent Based Models for Efficient Analysis
- Zane Troyer and Chris Ehlman: Evolution of integral transforms and their applications
- Anna Tuck: An Optional Unrelated-Question Randomized Response Model
- Tony Yaacoub: On Cartesian products of graphs and the Roman domination function
Graduate Students
- Mariya Bessonova: Spatial models of population growth
- Laura Boehm: Spatial Bridge Distribution for Random Effects in Logistic Regression Models
- Tim Brown: Applying Markov Chain Monte Carlo Model Composition to a Restricted Model Space
- Virginia Burger: Computational Prognosis of Cancer in Barrett’s Esophagus Patients
- William Ely: Pricing European Stock Options using Stochastic and Fuzzy Continuous Time Processes
- Haseeb Kazi: Do Age and Gender Correlate Significantly with Dyslipidemia?
- Liang Kong: Spreading Speeds of KPP Equations with Favorable/Unfavorable Zones
- Junchi Li: The Axelrod model for dissemination of culture
- George Merrill: Evolving Spatial Networks
- Allan Pangburn: A New Algorithm for Maximum Flow Distribution Network: Modified Push Algorithm
- Hwayeon Ryu: Effect of Tubular Inhomogeneities on Feedback-Mediated Dynamics of a Compliant Thick Ascending Limb
- John Steenbergen: Dimension Reduction, Laplacians, and Cheeger Numbers
- Mahadevan Vasudevan: Efficient Community Identification in Complex Networks
- David Vock: A Fast Computational Approach to Implement Flexible Random Effects Densities for Generalized Linear and Nonlinear Mixed Models
- Tharanga Wickramarachchi: Arc length - A New Approach to Measure the Risk
A Quantitative Examination of the Role of CD4 T-Cells and CD8 T-Cells on Carcinogenesis
Keila Alfred, NC A&T State University, Greensboro, NC
mentored by Dr. Nicholas Luke
Abstract: Rapid detection and treatment of tumors are important factors in decreasing the mortality rate in the world. For years, cancer immunologists assumed that immune system cells could not recognize tumor cells. However, the role that the immune response plays in cancer prevention or treatment is not yet fully understood. In this study, a mathematical model adapted from previous studies is used to investigate the roles of CD4- and CD8- T-cells. Studies have suggested that CD4 T-cells have a greater inhibitory effect on carcinogenesis, while CD8 T-cells may promote carcinogenesis. The mathematical model is adapted to reflect these hypotheses, and model simulations are compared to published experimental data. Additionally, a formal sensitivity analysis is conducted on the model.
Highly Complex Models of Disease Risk
Erika Alpeter and Bethany Vohlers, NC State University, Raleigh, NC
mentored by Dr. Alison Motsinger-Reif and Dr. David Reif
Abstract: As genetic association mapping rapidly evolves, new insights in the field recognize complex human traits as more heavily impacted by higher-order models of disease risk than initially assumed, attributing dozens, or even hundreds, of genetic variants to disease etiology. With advanced statistical and computational modeling, these genetic variants may be applied as clinical predictors. However, the high-dimensionality of these models confronts both traditional and modern data-mining approaches with important challenges with respect to variable selection and model identification. While new approaches such as Multifactor Dimensionality Reduction (MDR) have been tested with as many as five disease-associated genes, little is known about their performance with higher-dimensional risk models. Through the use of simulations, MDR's statistical integrity with high-dimensional risk models will be evaluated, and the sample sizes needed to model a range of these high-order effects will be empirically estimated. The use of other classifiers, including traditional statistical approaches such as logistic regression, will be evaluated in parallel and compared to the results of MDR. This study will utilize processing power at NCSU's High Performance Computing (HPC) center to allow feasible implementations of these computationally-intensive empirical comparisons.
Relating Steady States of Discrete and Continuous Models in Systems Biology
Joseph Arthur, NC State University, Raleigh, NC
mentored by Dr. Alan Veliz-Cuba
Abstract: We examine discrete multi-state network models and continuous ODE models for gene regulatory networks. Wittmann et al. have given an algorithm to transform a subclass of multi-state network models into ODE models that preserve steady state behavior. We generalize their work to include all multi-state networks and apply our algorithm to a model for T helper cell differentiation.
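As a toy illustration of the kind of steady-state question involved (the two-gene network below is our own hypothetical example, not the model from the talk), one can enumerate the states of a small Boolean network and keep those fixed by the update map:

```python
from itertools import product

# Hypothetical 2-gene Boolean network (illustrative only):
# x1' = x1 OR x2,  x2' = x1 AND x2.
def update(state):
    x1, x2 = state
    return (x1 or x2, x1 and x2)

# A steady state is a state that the update map sends to itself.
steady = [s for s in product([False, True], repeat=2) if update(s) == s]
print(steady)
```

The discrete-to-continuous correspondence studied in the talk asks when such fixed points survive the passage to an ODE model.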
Ecological Systems, Nonlinear Boundary Conditions, and Σ-shaped Bifurcation Curves
Kathryn Ashley and Victoria Sincavage, Clemson University, Clemson, SC
mentored by Dr. Ratnasingham Shivaji
Abstract: We examine a one dimensional reaction diffusion model with a weak Allee growth rate that appears in population dynamics. Notably, we combine grazing with a nonlinear boundary condition that models negative density-dependent dispersal on the boundary and analyze the effects on the steady states. In particular, we examine the bifurcation curves of positive solutions as the grazing parameter is varied. Our results are obtained through the adaptation of a quadrature method and Mathematica computations. Specifically, we computationally ascertain the existence of Σ-shaped bifurcation curves with at least twelve positive steady states for a certain range of the grazing parameter.
Evolution of integral transforms and their applications
Zane Troyer and Chris Ehlman, Clemson University, Clemson, SC
mentored by Dr. Irina Viktorova
Abstract: This research was conducted by a cross-disciplinary undergraduate team in a Creative Inquiry class on the evolution of integral transforms and their applications in scientific and technological fields. The class is designed as a fast-paced, research-style course offering deep insight into the background, mathematical theory, and engineering applications of several advanced integral transforms. The class was divided into three groups, one for each topic, and each group presented its part of the research on the various integral transforms; these presentations let the students act as the experts as they fielded questions from the rest of the class. The history group researched the rich historical background and opened the seminar presentations on each of the investigated transforms. The mathematical theory group had a few weeks per transform to learn, on their own, the mathematical concepts involved, the transform's properties and limitations, and how to use it to solve an application problem. The applications group researched traditional as well as new and sometimes surprising implementations of the transforms covered by the theory group, so the overall research of the three groups was unified and relevant. Besides looking in depth at several traditional and contemporary integral transforms, such as the Laplace, Fourier, Radon, and wavelet transforms (with individual and team research on the history, origins, mathematical fundamentals, and wide range of applications of each), research projects involving the direct implementation of the wavelet, Fourier, and Laplace transforms are being conducted.
The first research project covers the application of Fourier and wavelet transforms to the statistics of random noise and explores an approach to estimating the probability density function from data affected by random noise. The second project focuses on optimization techniques for finding the best parameter estimates in modeling of viscoelastic composite structural materials.
On some relations between chemical indices on trees
Marcus Bartlett, Clayton State University, Morrow, GA
mentored by Dr. Elliot J. Krop
Abstract: The Wiener index of a graph G is defined to be the sum of distances between every pair of vertices of G. When G is a k-ary tree, Hua Wang found a surprising relation between this index and the sum of distances between every pair of leaf vertices of G (called the gamma index) and showed a counterexample for another conjectured functional relationship. In this talk, we define two new natural indices (the spinal index and the Bartlett index) which when summed with the gamma index above, yield the Wiener index. We then show analogous relations to that of Wang, produce a counterexample to a functional relation for the spinal index, and state a conjecture about the Bartlett index.
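For readers unfamiliar with these indices, the Wiener index and the leaf-pair (gamma) sum can be computed directly from pairwise distances; a minimal sketch on a small example tree of our own choosing (not one from the talk):

```python
from collections import deque
from itertools import combinations

# Example tree as an adjacency list: a path 0-1-3-4 with an extra leaf 2 at 1.
tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}

def dist(g, s, t):
    # Breadth-first search distance between vertices s and t.
    seen, q = {s: 0}, deque([s])
    while q:
        v = q.popleft()
        if v == t:
            return seen[v]
        for w in g[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                q.append(w)

def wiener(g):
    # Sum of distances over all unordered pairs of vertices.
    return sum(dist(g, u, v) for u, v in combinations(g, 2))

# Gamma index: same sum restricted to pairs of leaves (degree-1 vertices).
leaves = [v for v in tree if len(tree[v]) == 1]
gamma = sum(dist(tree, u, v) for u, v in combinations(leaves, 2))
print(wiener(tree), gamma)
```

The spinal and Bartlett indices introduced in the talk partition the remaining pairs so that the three sums together give the Wiener index.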
CO-Mediated HbS Polymer Melting
Crystal Bennett, North Carolina A&T State University, Greensboro, NC
mentored by Dr. Liping Liu and Dr. Catherine D. White
Abstract: Sickle cell anemia is a disorder caused by a mutation in DNA that replaces the amino acid glutamic acid with valine. This replacement causes a change in the characteristics of hemoglobin that allows the monomers, the simplest units of chemically binding molecules, to stick together. These chains of monomers, called polymers, distort the shape and properties of the red blood cell. The malformed cells do not efficiently pass through capillaries or transport oxygen to the body's tissues. In order to make these cells more effective, the polymers must be broken apart. The process of breaking polymers apart is called melting. In the referenced study, the melting was induced by immersing the polymers in a buffer solution containing carbon monoxide. The mathematical model of this process was produced in a separate study. The purpose of this paper is to analyze and reproduce the current mathematical model using various computational and numerical tools.
Spatial models of population growth
Mariya Bessonova, Duke University, Durham, NC
mentored by Dr. Richard Durrett
Abstract: The quadratic contact process is an interacting particle system that can be interpreted as a simple model for the growth of a population. We label each vertex of a graph: 1 if the vertex is occupied by a particle and 0 if it is vacant. This system evolves randomly in time. An adjacent pair of particles gives birth to a new particle at a fixed birth rate, and particles die at some fixed death rate. We will address the question: does the population die out or does it reach some nontrivial equilibrium distribution?
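A crude discrete-time caricature of such dynamics can be simulated on a ring (this is our own simplification for illustration; the talk concerns the continuous-time process):

```python
import random

def step(config, birth, death, rng):
    """One discrete-time sweep of a toy quadratic contact process on a ring.
    An adjacent occupied pair may place a new particle next to itself with
    probability `birth`; each particle dies with probability `death`."""
    n = len(config)
    new = config[:]
    for i in range(n):
        if config[i] and config[(i + 1) % n]:  # adjacent occupied pair
            # the pair tries to give birth at one of its two outer neighbors
            target = (i - 1) % n if rng.random() < 0.5 else (i + 2) % n
            if rng.random() < birth:
                new[target] = 1
    for i in range(n):
        if config[i] and rng.random() < death:
            new[i] = 0
    return new

rng = random.Random(0)
config = [1] * 100           # start fully occupied
for _ in range(200):
    config = step(config, birth=0.8, death=0.1, rng=rng)
density = sum(config) / len(config)
print(density)
```

Sweeping the birth and death rates in such a simulation is one informal way to see the survival-versus-extinction dichotomy the talk addresses rigorously.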
A Comparison of Methods to Compute Confidence Intervals for Proportions
Ted Berkowitz, Elon University
mentored by Dr. Kirsten Doehler
Abstract: Computing confidence intervals for proportions is standard in numerous disciplines. The traditional method of computing a confidence interval for a proportion that is mentioned in many introductory statistics textbooks and covered in introductory statistics classes is the simple asymptotic method. Also known as the Wald method, this interval has inadequate and inconsistent coverage probabilities which can fall noticeably below the nominal level, even when point estimates are near 0.5 and sample sizes are very large. However, numerous alternatives to the simple asymptotic method are available, including some that are not too computationally complicated for those new to statistics. We have performed a simulation study to observe the coverage probabilities and interval lengths of several different confidence interval methods for proportions including the modified Agresti-Coull interval, the Wald interval using a t distribution value, the Clopper-Pearson “exact” interval, and Wilson’s score interval. We also discuss the important differences in meaning for a desired level of confidence of an interval.
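Two of the intervals compared can be sketched from their standard textbook formulas (this is not the authors' simulation code):

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    # Simple asymptotic (Wald) interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n).
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval: the center shrinks p_hat toward 1/2.
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)

# With zero observed successes the Wald interval degenerates to (0, 0),
# while the Wilson interval still has positive width.
print(wald_ci(0, 20))
print(wilson_ci(0, 20))
```

The degenerate zero-width Wald interval above is one simple symptom of the coverage problems the abstract describes.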
Spatial Bridge Distribution for Random Effects in Logistic Regression Models
Laura Boehm, NC State University, Raleigh, NC
mentored by Dr. Brian Reich and Dipankar Bandyopadhyay
Abstract: Spatially-referenced binary data are common in epidemiology. Owing to the elegant log-odds interpretation of the regression coefficients, a natural model for these data is a logistic regression. To account for missing confounding variables that may have a spatial pattern, such as those relating to local socioeconomic or environmental conditions, it is common to include a Gaussian spatial random effect. Conditioned on the spatial random effect, the coefficients may be interpreted as log odds ratios. However, marginally over the random effects, the coefficients are no longer log odds, and the estimates are hard to interpret and generalize to other spatial regions. To resolve this issue, we propose a new spatial random effect distribution which ensures that the regression coefficients maintain the log-odds interpretation both conditional on and marginally over the spatial random effects. We test this model in a simulation study, and apply the new methodology to dental health data for an under-served South Carolina population. The method is flexible enough to handle areal or geostatistical approaches, and hierarchical models with multiple random intercepts, for example, both subject and site random effects.
Applying Markov chain Monte Carlo model composition to a restricted model space
Tim Brown, University of North Carolina Wilmington
mentored by Dr. Susan Simmons
Abstract: Model selection is essential across many fields of research, helping to identify which components or variables of a predictor are related to a response. With model search algorithms, there are usually more observations than features; however, there are also many instances where this is not true. This dilemma is called the P>>N problem. We address it by incorporating a Markov chain Monte Carlo model composition (MC3) algorithm within a Bayesian hierarchical model. Previous work has been done with both this model and algorithm; in this case, however, we use a restricted model space search procedure.
Computational Prognosis of Cancer in Barrett’s Esophagus Patients
Virginia Burger, University of Pittsburgh, Pittsburgh, PA
mentored by Dr. Chakra Chennubhotla
Abstract: Barrett’s Esophagus with no dysplasia leads to esophageal adenocarcinoma in 0.1% of patients per year. However, as dysplasia occurs and increases, this risk increases from 0.5% to 10% for patients with high-grade dysplasia. The sooner dysplasia is diagnosed, the sooner treatment can begin with the intention of preventing the progression from dysplasia to cancer. We look at biopsies of Barrett’s esophagus patients with diverse levels of dysplasia over a five-year period in order to determine a computational prognosis method for dysplasia and cancer risk. The biopsies are labeled with biomarkers, which we measure on individual nuclei.
A typical tissue biopsy in our data set is 6000x6000 pixels and contains around 10,000 nuclei, making nuclei segmentation the major task in this project. As the biopsy images have varied intensities across image regions, intensity thresholding for nuclei detection is not robust. We propose a semi-supervised labeling method in which user-defined labels are propagated onto the tissue image using eigenfunctions. The graph Laplacian corresponding to an image defines a smoothness operator on the image; its eigenvectors can be used to smoothly partition the image based on some known labels. However, the graph Laplacian of an image with n pixels is an n by n matrix, thus making efficient eigenvector computation infeasible. Instead, we use the eigenfunctions of the graph Laplacian, which approximate the eigenvectors of the graph Laplacian as the number of pixels approaches infinity and can be computed rapidly. Because eigenfunctions are better able to partition the data when provided with a large set of labeled points, we use spectral embedding to enlarge the set of user-labeled points before applying eigenfunctions to propagate the labels onto the entire image.
Our semi-supervised model for image segmentation provides efficient segmentation of enormous images. In combination with cancer biomarkers on the tissues, we seek to develop a feature set for dysplasia and cancer prognosis in Barrett’s esophagus patients.
3-D Vesicle Membrane Simulation
Gray Crenshaw, Stephen Gray, Jung-Wook Park, Fordham University, Bronx, NY
mentored by Dr. Rolf Ryham
Abstract: We have developed a numerical code for simulating fully three-dimensional vesicle membranes. We will test candidate mathematical models of membranes and generate pictures and movies for comparison with microscopy experiments on red blood cells. Membrane shape change is modeled as a dynamic process based on physical principles, the Navier-Stokes fluid flow equations, and a phase field energy. Through our simulations, we will determine the strength of the shear flow conditions that compromise the integrity of the cell membrane. Detailed pictures of membrane geometry will become well understood as we work to program three-dimensional models and adequately demonstrate natural phenomena. One beneficial application of this simulation is in the field of hematology. Since hematology deals with the behavior of red blood cells, the numerical experiment provides quantitative and visual data that are challenging to extract experimentally. This project will become an important computational tool for understanding membrane fusion.
Tensegrity
Jacqulyn Currie, NC A&T, Greensboro, NC
mentored by Dr. Donna Monlinek
Abstract: My recent research was done on tensegrity, a term coined by Buckminster Fuller after he studied the sculptures of Kenneth Snelson. Tensegrity structures consist of rods, cables, and vertices (the ends of the rods). Tensegrity structures stabilize themselves because of the way in which tensional and compressive forces are distributed and balanced within the structure; that is, the sum of the forces at each vertex is zero. The cables within the structure keep vertices close together and the rods hold them apart. Two vertices connected by a cable may be as close together as desired, but they may never be farther apart than the length of the cable joining them. Two vertices joined by a rod may never be closer than the length of the rod, but may be arbitrarily far apart. The human skeleton is an example of a tensegrity structure. Our skeleton is pulled up against the force of gravity and stabilized in a vertical form by the pull of tensile muscles, tendons and ligaments.
Biologists have also studied tensegrity to understand cellular behavior. Their research suggests that cells get their shape from tensegrity: most cells derive their structure from the cellular matrix, an anchoring scaffolding to which cells are naturally secured in the body. Inside the cell, a gossamer network of contractile microfilaments, an element of the cytoskeleton, extends throughout the cell exerting tension. It pulls the cell's membrane and all its internal constituents toward the nucleus at the core. The cytoskeleton is surrounded by membranes and penetrated by viscous fluid, and it is this hard-wired network of molecular struts and cables that stabilizes cell shape. My work on tensegrity focused on finding the configurations and lengths of the components of tensegrity structures in order to build them.
The Effect of Vector-Host Coupling on Vector-Borne Disease Dynamics: Epidemiological Consequences of Vector's Oviposition Dependency on Host-Related Habitats
Edwitch Dely and Dai Hyun Kwon, UNCG
mentored by Dr. Gideon Wasserberg and Dr. Cliff Smyth
Abstract: Modeling vector-borne diseases, in particular pathogens transmitted by blood-sucking arthropods, presents the challenge of determining which factors influence the invasion or persistence of the disease agent in a susceptible host population. Notably, the significant role of the host is often neglected when modeling vector-borne disease dynamics. Vectors need hosts as a source of blood-meals for egg production as well as a source of habitat. Our study focused on the effect on disease dynamics of the vector's dependence on its host (hereafter termed vector-host coupling). Specifically, we investigated the epidemiological consequences of the vector's population dynamics due to the availability of host habitat as breeding sites. We used an object-oriented programming approach in MATLAB to simulate three vector-host coupling scenarios: uncoupled, loosely-coupled, and coupled. The uncoupled scenario is a hypothetical system where contact between vector and host is random. The loosely-coupled scenario models vectors that depend on the host mainly for obtaining blood-meals (e.g., mosquitoes). The coupled scenario models vectors that depend on the host as a source of habitat (e.g., ticks, fleas, mites, etc.). We also performed a meta-analysis to validate predictions of our model with respect to patterns published in scientific journals. In contrast to conventional models that expect inverse relations, we observed a positive relationship between host abundance and infection prevalence for both the loosely-coupled and coupled systems. In the latter, however, a decrease in infection prevalence was observed at higher host abundance. The meta-analysis reported a positive association in most cases between host number and disease prevalence.
Optimization of Parameter Estimates for Nonlinear Viscoelastic Models
Z. Dixon, J. Fisher, M. Scruggs, Clemson University, Clemson, SC
mentored by Dr. Irina Viktorova
Abstract: The Volterra theory of heredity finds its applications in various branches of mathematical physics. The presented methodologies are based on the nonlinear hereditary type relationship between stresses, strains and time in visco-elastic solids--materials with memory. This relationship can be modeled by the second type of Volterra's equation. It has been shown that the Volterra's equation of second type can successfully model the wide range of materials tested including polymers, composites, and metals. As the rate of loading increases, the stress-strain diagram more closely matches the model. This model provides an upper bound for the whole region of possible deformation of the material under consideration. The choice of kernel associated with the use of the aforementioned Volterra's equation is the subject of several objective considerations. Physical and mathematical adequacy are the dominant ones. The use of the exponential of arbitrary order function as kernel is the most general type to satisfy the above considerations. This research presents two methodologies for obtaining parameter estimates associated with the use of this kernel. In this talk, we will also present experimental data from creep and quasi-static loading tests that verify the described approaches.
Risk Prediction for Health Screenings - A Data Mining Approach
Christopher Donaldson, Rowan University, Glassboro, NJ
mentored by Dr. Umashanger Thayasivam
Abstract: Health screening data are important for analyzing the overall health and wellness of an area. This article focuses primarily on the blood pressure and blood sugar values of those who participated in the screenings and investigates the effects that factors such as age, location, ethnicity, or gender may have on those values. More specifically, participants considered "high risk" in terms of their blood pressure or blood sugar values are emphasized so that we may discover what leads to this "high risk" classification. In analyzing these values and trends we were also able to determine whether the health screenings were having a positive or negative effect, from year to year, on the individuals who participated in them. These issues were primarily explored with the use of modern statistical data mining techniques, namely classification trees and prediction profilers with logistic regression, as well as some other less rigorous methods.
Pricing European Stock Options using Stochastic and Fuzzy Continuous Time Processes
William Ely, UNCG
mentored by Dr. Jan Rychtář
Abstract: Over the past 40 years, much of mathematical finance has been built on the premise that stocks tend to move according to continuous-time stochastic processes, particularly Geometric Brownian Motion. However, fuzzy set theory has recently been shown to hold promise as a model for financial uncertainty as well, with continuous-time fuzzy processes used in place of Brownian Motion. Like Brownian Motion, fuzzy processes cannot be measured using a traditional Lebesgue integral. This problem was solved on the stochastic side with the development of Ito's calculus; likewise, the Liu integral has been developed to measure fuzzy processes. In this paper I will describe and compare the theoretical underpinnings of these models, as well as "back-test" several variations of them on historical market data. These results will be analyzed to identify trends and differences between and within the models.
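On the stochastic side, the baseline can be sketched as a Geometric Brownian Motion Monte Carlo price next to the Black-Scholes formula (a standard textbook sketch with made-up parameters, not the author's back-testing code):

```python
from math import exp, sqrt, log, erf
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, r, sigma, T):
    # Black-Scholes price of a European call under Geometric Brownian Motion.
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S, K, r, sigma, T, n=200_000, seed=0):
    # Monte Carlo price: simulate terminal prices under GBM and
    # discount the average payoff.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        ST = S * exp((r - sigma ** 2 / 2) * T + sigma * sqrt(T) * z)
        total += max(ST - K, 0.0)
    return exp(-r * T) * total / n

print(bs_call(100, 100, 0.05, 0.2, 1.0))
print(mc_call(100, 100, 0.05, 0.2, 1.0))
```

In the fuzzy-process approach the abstract describes, the Liu integral plays the role that the Ito integral plays in the simulation above.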
Extension of Grammatical Evolution Decision Trees for Family Trio Data
Amanda English and Chong Wang, NC State University, Raleigh, NC
mentored by Dr. Alison Motsinger-Reif
Abstract: With today's advanced genotyping technologies, the number of genetic variants per individual that are available for disease-mapping studies is exponentially increasing, posing an important computational problem. Current analytical methods are computationally infeasible in the face of the combinatorial explosion created when considering complex genetic models in high-dimensional datasets generated by these new technologies. Evolutionary computation approaches have shown promise in addressing such high-dimensional combinatoric problems. However, these have largely been applied only to genetic data on unrelated individuals (i.e. case-control data). In this study, an evolutionary computation method that uses grammatical evolution to evolve decision trees (GEDT) will be extended to consider trios, in which disease cases and their respective parents are collected for gene-mapping. Using previously-developed simulation software, we will evaluate the ability of GEDT to identify disease-associated loci in trio data and characterize its performance across a range of complex models. This study will be implemented using NCSU’s super-computing cluster and result in distributable software for these cutting-edge methods.
Non-Medical Use of Stimulant Medication Among College Students: An Optional Randomized Response Technique
Jennifer Figueroa, UNCG
coauthored by Anna Tuck, UNCG
mentored by Dr. Sat Gupta and Dr. May Crowe
Abstract: This study tests the efficacy of a statistical method called the randomized response technique (RRT), which has been found to be effective in reducing response bias in studies involving questions that are sensitive in nature. The sensitive question of interest for us was the non-medical use of stimulant medication. The main focus of our project was on improving the original unrelated question model by Greenberg by allowing the respondent to answer the stimulant misuse question truthfully if that question was considered non-sensitive, or to answer using the unrelated question model if the question was deemed sensitive. Surveys were administered to a random sample of 550 undergraduate college students at the University of North Carolina at Greensboro. The check-box confidential survey was given to 150 participants, the face-to-face question survey was given to 150 participants, and the Optional RRT survey was given to 250 participants. The efficacy of the proposed Optional RRT method was checked by comparing the results of the Optional RRT survey with those of the check-box confidential survey and the face-to-face question method.
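The baseline Greenberg unrelated-question estimator that the optional model extends can be sketched as follows (the numbers are toy values, not the study's data; the optional variant adds a truthful-response branch not shown here):

```python
def greenberg_estimate(yes_count, n, p, pi_unrelated):
    """Estimate the prevalence of a sensitive trait from unrelated-question
    RRT responses. Each respondent answers the sensitive question with
    probability p, and otherwise an unrelated question whose 'yes' rate
    pi_unrelated is known; only a 'yes'/'no' answer is observed."""
    lam_hat = yes_count / n  # observed proportion of 'yes' answers
    # lam = p * pi_sensitive + (1 - p) * pi_unrelated, solved for pi_sensitive
    return (lam_hat - (1 - p) * pi_unrelated) / p

# Hypothetical example: 90 'yes' answers out of 250 respondents, with
# p = 0.7 and an unrelated question answered 'yes' 50% of the time.
print(greenberg_estimate(90, 250, 0.7, 0.5))
```

Because each respondent's actual question is hidden, the method protects privacy while still yielding a consistent prevalence estimate.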
The 411 on the BCS
Gerald D. Gamble, North Carolina A&T State University, Greensboro, NC
mentored by Dr. Dominic Clemence
Abstract: In my presentation, I will discuss and analyze some of the growing pains of the BCS and how the computer rankings and human polls make up the ranking system we use today. Although there has been increasing demand for a formal playoff system, a workable formula for including the Bowls may never be attainable.
The relative impact of Protease and Reverse Transcriptase Inhibition
Candace Ghent, North Carolina A&T State University, Greensboro, NC
mentored by Dr. Gregory Gibson and Dr. Gregory Goins
Abstract: In this research, we will study the impact of levels of reverse transcriptase inhibitors (RTIs) and protease inhibitors (PIs) on human immunodeficiency virus (HIV) infection, seeking regimens that maximize the CD4+ cell count and viral suppression. We will analyze the relative impacts of RTIs and PIs on these outcomes. We will then further develop our model and equations to account for CD4+ cells produced by stem cells that have been genetically engineered to resist HIV infection, and examine the trade-off between the presence of these immune cells and the amount of drug needed to suppress the virus. We will model with systems of ordinary differential equations using MATLAB programming, more specifically SimBiology. Simulations will be done using the Runge-Kutta method in Microsoft Excel.
Comparison of Internal Model Validation Methods for Multifactor Dimensionality Reduction to Find Complex Models
Jeffrey Gory and Holly Sweeney, NC State University, Raleigh, NC, mentored by Dr. Alison Motsinger-Reif
Abstract: Determining the genes responsible for complex human traits can be challenging when the underlying genetic model takes a complicated form, such as genetic heterogeneity (in which different genetic models can result in the same trait) or epistasis (in which genes interact with other genes and the environment). Multifactor Dimensionality Reduction (MDR) is a widely used method that effectively detects epistasis; however, the presence of genetic heterogeneity can confound the standard cross-validation procedure used for internal model validation. Cross-validation allows for only one “best” model and is therefore inadequate when more than one model could cause the same trait. We hypothesize that an alternative internal model validation method, the three-way split, will be better at detecting heterogeneity models. To test this, we will simulate genetic data that exhibits heterogeneity, implement MDR with each of the two internal model validation methods, and then compare the results. The simulated datasets will be based on a variety of heterogeneity models (covering a range of heritabilities and penetrance models) so that the relative performance of the two internal model validation methods can be evaluated across an array of situations. These methods will be evaluated using empirical power calculations across the various datasets. Our results will be used to characterize the situations wherein each of the two internal model validation methods is most appropriate.
Detecting and Modeling the Genes Influenced by Natural Selection in Drosophila ananassae
Zachary Huntington-Meath and Matthew Jester, UNCG, mentored by Dr. Malcolm Schug, Dr. Roland Deutsch, and Dr. David Remington
Abstract: Evolutionary biologists are interested in detecting genes that are targets of natural selection in the genomes of organisms in natural populations. Many factors affect the ability to detect natural selection, including genetic drift, migration, mutation, and recombination. We are currently developing a mathematical model using a coalescent approach to identify genes that have been targets of natural selection. The coalescent model predicts the time at which two or more loci have a common ancestor. This model will give us a genealogy of a certain number of alleles at each locus that we will use to generate a neutral genealogy. We will compare data from natural populations with our neutral model to determine whether natural selection has affected a specific region of the genome. To test the strength of the model and determine appropriate parameter values for genetic drift, natural selection, migration, and mutation, we will use molecular markers called microsatellites distributed across the genomic region around the furrowed gene in D. ananassae, which was previously shown to be a target of natural selection. We will present the genome markers and parameters of the model.
Do Age and Gender Correlate Significantly with Dyslipidemia?
Haseeb Kazi, St. Luke's Hospital, Bethlehem, PA, mentored by David W. Leh, MD
Abstract: Dyslipidemia increases morbidity and mortality related to coronary heart disease (CHD). ATP III Guidelines and Framingham scores determine the risk of CHD, and coronary risk factors are presumed to be related to advanced age. The American Heart Association recommends obtaining a lipid panel for any individual over the age of 20. Based on the observed increase in the multitude of comorbidities in younger individuals, we hypothesize that increasing age does not directly correlate with dyslipidemia (and subsequent CHD).
Data were obtained by retrospective review of 150 randomly selected admissions with a primary diagnosis of chest pain. Lack of a lipid profile, or chest pain as a secondary diagnosis, were exclusion criteria. Data obtained from the Internal Medicine residency patient lists at St. Luke's Hospital, Bethlehem, PA, from 2006 to 2008 were recorded and compared with the ATP III Guidelines to classify risk factors and determine abnormal lipid profiles. Demographic details (age, gender, etc.) were also obtained. The correlation of age and gender with dyslipidemia was examined by fitting a binary logistic regression model. One-sided p-values of 0.095 for age and 0.035 for gender suggest that age may not significantly correlate with dyslipidemia (and subsequent CHD) but gender may. Odds ratios of 0.52 (gender) and 0.99 (age) indicate that women had a 48% reduced risk of dyslipidemia compared to men, and that the risk goes down by 1% with each additional year of age.
Conclusions: Advanced age fails to demonstrate an increased prevalence of dyslipidemia. Gender had a far greater correlation with dyslipidemia, with men at significantly higher risk. Interestingly, the eldest patients (age > 72) had the lowest prevalence of dyslipidemia. This may be indicative of better personal care or optimal pharmacologic treatment for these patients. Patient education and strict treatment regimens should be enforced early on to prevent adverse events in all patients with dyslipidemia.
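The abstract reports logistic-regression results on the odds-ratio scale; the small sketch below shows how those numbers convert between coefficients and odds ratios, using only the two odds ratios reported above (0.52 for gender, 0.99 per year of age). The decade-long compounding is our own illustrative extrapolation, not a result from the study.

```python
import math

# Logistic regression estimates coefficients beta on the log-odds scale;
# exponentiating gives odds ratios (OR). The ORs below are from the abstract.
or_gender = 0.52   # women vs. men
or_age = 0.99      # per additional year of age

beta_gender = math.log(or_gender)  # coefficient implied by the OR (negative)
beta_age = math.log(or_age)

# Percent change in odds: (OR - 1) * 100
pct_gender = (or_gender - 1) * 100  # about -48, i.e. 48% lower odds for women
pct_age = (or_age - 1) * 100        # about -1, i.e. 1% lower odds per year

# Per-year odds ratios compound multiplicatively, so over a decade:
or_ten_years = or_age ** 10         # roughly 0.90, about 10% lower odds
```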
The WAY We Learn – an Educational Data Mining (EDM) Approach
John Kelly, Rowan University, Glassboro, NJ, mentored by Dr. Umashanger Thayasivam
Abstract: Data mining is an emerging field on the border between statistics and computer science. The techniques of data mining are being used in many different fields to explore large data sets in an effort to find hidden patterns that traditional methods may miss. In this research study, data mining and statistical techniques were used to explore a data set of LCI (Learning Connections Inventory) scores. Focusing on the scores of engineering majors who have successfully graduated from college, this research attempts to explore differences across major, gender, and GPA in order to identify effective student learning patterns. The goal of this study is to advance our understanding of how students learn and provide a foundation for developing strategies to improve student success across the spectrum of students.
Spreading Speeds of KPP Equations with Favorable/Unfavorable Zones
Liang Kong, Auburn University, Auburn, AL, mentored by Dr. Wenxian Shen
Abstract: We will study the spatial spreading dynamics of mono-stable equations with random dispersal in spatially periodic habitats; in particular, the existence and characterization of spreading speeds is considered. Random dispersal is essentially a local behavior which describes the movement of organisms between adjacent spatial locations. The speed of the movement is characterized by the speed of traveling wave solutions of the corresponding second-order parabolic equation. Using a principal eigenvalue theory for random dispersal operators with spatially periodic dependence, it is shown that the mono-stable equation with random dispersal has a spreading speed in every direction.
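The abstract does not display its equations; for orientation, a standard spatially periodic KPP (mono-stable) form and the usual principal-eigenvalue characterization of the spreading speed look as follows. This is a generic illustration of the setting, not necessarily the exact equation of the talk.

```latex
% Mono-stable (KPP) equation with random dispersal in a periodic habitat;
% the periodic coefficient a(x) encodes favorable (a>0) / unfavorable (a<0) zones:
u_t = \Delta u + u\big(a(x) - u\big), \qquad a \ \text{spatially periodic}.
% Spreading speed in direction \xi, where \lambda(\mu,\xi) is the principal
% eigenvalue of \ \phi \mapsto \Delta\phi - 2\mu\,\xi\cdot\nabla\phi + \big(\mu^2 + a(x)\big)\phi:
c^{*}(\xi) = \inf_{\mu > 0} \frac{\lambda(\mu,\xi)}{\mu}.
```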
The Axelrod model for dissemination of culture
Junchi Li, Duke University, Durham, NC, mentored by Dr. Rick Durrett
Abstract: Introduced in 1997 by social scientist R. Axelrod, the model is a type of interacting particle system, similar to the voter model, which accounts for homophily. Each vertex of the network of interactions is characterized by a fixed number of cultural features. Pairs of adjacent vertices interact at a rate proportional to the number of features they share, resulting in the interacting pair having one more cultural feature in common. Numerical results in the past ten years suggest that (i) when the number of cultural features and the number of states per feature both equal two or (ii) when the number of features exceeds the number of states per feature, the system converges to a monocultural equilibrium in the sense that a single culture ultimately occupies a large fraction of the graph, while (iii) when the number of states per feature exceeds the number of features, the system freezes in a highly fragmented configuration. We will focus on sketching analytical proofs of these conjectures where they are available.
Efficient Total Variation Minimization for Speckle Image Denoising
Ethan Lockhart, North Carolina State University, Raleigh, NC, mentored by Dr. Hyeona Lim
Abstract: Image denoising is an important image processing procedure for various real world applications. It is often necessary as a pre-processing for other imaging techniques such as segmentation and zooming. Chambolle has produced a quick dual approach algorithm that minimizes the total variation norm for image denoising. However, this algorithm is intended for images with synthetically added Gaussian noise only. We develop a new denoising method for natural speckle noise images based on the Chambolle algorithm. We enhance the new method using central difference methods for computational accuracy and texture free residual (TFR) parameterization to preserve textures and fine structures. Our computational results compare favorably to the original Chambolle algorithm and other conventional denoising methods.
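For readers unfamiliar with total variation (TV) denoising, here is a minimal stand-in sketch: gradient descent on a smoothed version of the ROF energy 0.5*||u - f||^2 + lam*TV(u). This is deliberately the simplest possible illustration of TV minimization, not Chambolle's dual projection algorithm or the speckle-adapted method of the talk; the test image, lam, eps, and step size are all invented.

```python
import numpy as np

def tv(u):
    # anisotropic total variation: sum of absolute forward differences
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def grad_tv_smoothed(u, eps):
    # gradient of sum_i sqrt(diff_i^2 + eps^2), an eps-smoothed TV
    gx = np.diff(u, axis=1)
    gy = np.diff(u, axis=0)
    px = gx / np.sqrt(gx**2 + eps**2)
    py = gy / np.sqrt(gy**2 + eps**2)
    g = np.zeros_like(u)
    g[:, :-1] -= px; g[:, 1:] += px  # adjoint of horizontal differences
    g[:-1, :] -= py; g[1:, :] += py  # adjoint of vertical differences
    return g

def denoise(f, lam=0.2, eps=0.1, step=0.05, iters=300):
    u = f.copy()
    for _ in range(iters):
        u -= step * ((u - f) + lam * grad_tv_smoothed(u, eps))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[4:12, 4:12] = 1.0  # piecewise-constant image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
u = denoise(noisy)
```

Because the descent monotonically reduces the energy, the total variation of the output is substantially below that of the noisy input while the data-fidelity term keeps u close to f.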
Evolving Spatial Networks
George Merrill, UNCG, mentored by Dr. Jan Rychtář
Abstract: We model networks as individuals positioned in the plane. Rules to update the network are based on the payoff matrix of a two-player Prisoners' Dilemma game. We analyze graph properties of the resulting network including the clustering coefficient, degree distribution, and average edge weight.
The Sensitivity Analysis for the Sickle Cell Polymer Melting Model
Yao Messan, NC A&T State University, Greensboro, NC, mentored by Dr. Liping Liu
Abstract: In this study we investigate the carbon monoxide mediated sickle cell polymer melting model. A system of ordinary differential equations is used to describe the sickle cell depolymerization dynamics. We study the dynamics in a general case and in two extreme cases: an absence of carbon monoxide (CO) and an abundance of CO. We conduct a sensitivity analysis on the CO and the melting/binding parameters to study the behavior of the HbS monomers and polymers. The simulations are conducted using various numerical schemes, including the Euler, Runge-Kutta, and Non-Standard Finite Difference methods.
Numerical Confirmation of a Stochastic Model of Taylor Dispersion for all Times with New Experiments
Will Milliken, UNC, Chapel Hill, NC, mentored by Dr. Keith Mertens
Abstract: The research presented is the numerical verification of a stochastic model of Taylor dispersion developed by McLaughlin, Lin, and Camassa, measuring concentration along the y-average and along partial planar slices. This model holds for all times, extending Taylor's macroscopic, deterministic treatment of pipe flow, which only holds for very long times. Monte Carlo simulations have verified the analytic solution for the variance and have verified agreement of the various partial averages with the cross-sectionally averaged theory. These are some of the first experiments conducted since Taylor's to test the theoretical work developed since. New techniques and a new apparatus have been developed to verify the analytic solution experimentally. Preliminary experimental results successfully match Taylor's theory, i.e. the variance converges after one Taylor timescale.
Wavelet De-convolution Techniques for Estimating Probability Density Functions
S. Moffitt, S. Yoon, I. Zeller, Clemson University, Clemson, SC, mentored by Dr. Irina Viktorova
Abstract: Estimating probability density functions is a valuable but difficult task because of the infinite dimensional nature of functions and the large data sets which are required. It becomes even more problematic in situations where the observed data is not pure but is contaminated with random noise. Fourier and wavelet transforms have been used in the de-convolution of noisy data in recent results. To make the research highly applicable, various sizes of experimental data must be generated to optimize the selection parameters of the wavelet PDF estimator. This will allow sets of convoluted data with an unknown distribution to be analyzed by sampling the data of various sizes. If the appropriate estimator is being used, a common convergence to a PDF will be observed when using the selection parameters for the varying sample sizes. Achieving this convergence to the targeted PDF is the major focus of the project. Numerical results and graphs will be presented to show how the probability density functions can be recovered from convoluted random data.
The Effects of Floral Reflectance Plasticity and Induction Temperature in Plantago lanceolata
Hannah Moore and John Patterson, UNCG, mentored by Dr. Elizabeth Lacey and Dr. Scott Richter
Abstract: When an organism modifies its phenotype in response to changes in the environment, the organism exhibits phenotypic plasticity. One phenotypically plastic species, Plantago lanceolata, shows a noticeable difference in flower color. The color difference represents varying rates of reflectance in the spike. The temperature at which spikes are induced determines the level of reflectance. Previous experiments demonstrate that spikes induced at low temperature always had a higher internal temperature than those induced at warm temperature. In a natural environment, additional variables including external temperature, solar radiation, and wind velocity would be present and could affect internal spike temperature. Our first project seeks to determine the persistence of differences in internal spike temperatures between cold-induced and warm-induced plants in a natural environment. Preliminary analysis suggests external temperature has the greatest effect on internal spike temperature. A previous study examining pollen grain size, pollen germination, and internal spike temperature found that internal spike temperature affects male reproductive success. Our second project expands on these results to examine the effect of temperature and floral reflectance plasticity on male reproductive success. We consider this effect with regard to the number of pollen grains produced and the observed probability of germination. Preliminary analysis shows that germination temperature has the greatest effect on the probability of germination. This illustrates the global significance of understanding floral reflectance plasticity in light of the ways plant life may adapt to global warming.
Investigating the Impact of the 2011 BBCOR Specifications in the Big South Conference
Garland J. Mosteller, Presbyterian College, Clinton, SC, mentored by Dr. C. Clint Harshaw
Abstract: Since 1999, the NCAA's baseball bat performance standard has used the Ball Exit Speed Ratio (BESR). The BESR certification represented the ratio of the speed of the ball coming off the bat to the ball's pitched speed and the bat's speed. New technology in baseball bat production allowed increased performance as bats were broken in with use. In 2009, the NCAA found that bats which initially met the BESR certification standard failed the standard with extensive use. That is, as bats were used, they became more lively – and indeed injurious – to the point that they out-performed the BESR certification standard. Offensive production was excessively high, and pitchers and infielders were at a high risk of injury due to the shorter reaction time. In 2009, the NCAA imposed a moratorium on composite bats. This set in motion a revision of the certification standard, known as the Bat-Ball Coefficient of Restitution (BBCOR). The BBCOR certification standard was introduced throughout collegiate baseball in 2011 and will be introduced in high school baseball beginning in 2012. This research examines the impact of the BBCOR certification standard in the Big South Conference.
A discrete model of iron metabolism in lung epithelial cells with fungal challenge
John Nardini, NC State University, Raleigh, NC, mentored by Dr. Reinhard Laubenbacher
Abstract: Iron is essential for the growth and survival of the cells in our body as well as the pathogens attacking them. As such, mammalian cells have developed complex mechanisms of both regulating their iron stores and withholding iron from microbial invaders. In particular, lung epithelial cells are a prime target for fungal infection because of constant exposure to airborne pathogens. Upon fungal introduction into the airway, an innate immune response is initiated to combat the pathogen. The ensuing struggle is a battle for iron, with the host triumphing if it can deprive the fungus of enough iron and the fungus winning if it can overcome the iron deficiency induced by the host’s immune proteins. This paper presents a logical model of iron metabolism in lung epithelial cells exposed to proinflammatory cytokines and the fungi Aspergillus fumigatus and Alternaria alternata. It makes predictions about the way in which lung epithelial cells sequester excess extracellular iron, along with how internal iron is stored and released from the cell. Additionally, it allows for the testing of conditions that are experimentally intractable, a process beneficial to many fields, as novel interactions and relationships can be explored without laboratory experimentation.
A Comparison of the Neighbor Joining and Balanced Minimal Evolution Methods
Matthew Neal and Andrew Niswander, Winthrop University, Rock Hill, SC, mentored by Dr. Joseph Rusinko
Abstract: Many endeavors in bioinformatics require the construction of phylogenetic trees. One major obstacle to such undertakings is that there remains disagreement about the best algorithm to use in these constructions. Although phylogenetic trees constructed on the basis of sequenced nucleotides or amino acids provide insight into evolutionary relationships, these trees may not accurately represent the species' true history. This work analyzes two distance-based phylogenetic methods of tree construction – the Neighbor Joining (NJ) and Balanced Minimal Evolution (BME) methods. We discuss the combinatorics of the 5-taxa trees of each method and move on to a comparison of the NJ and BME methods in the construction of the history of the platypus (O. anatinus). We then discuss possible issues with each method, such as convergent evolution and the effect of rogue taxa.
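As background for the comparison, the Neighbor Joining step can be sketched in a few lines: at each iteration the pair (i, j) minimizing Q(i, j) = (n - 2) d(i, j) - Σ_k d(i, k) - Σ_k d(j, k) is joined. The 4-taxon distance matrix below is a made-up example, not data from the platypus study.

```python
import numpy as np

def q_matrix(d):
    # Neighbor Joining selection criterion:
    # Q(i,j) = (n - 2) * d(i,j) - rowsum(i) - rowsum(j)
    d = np.asarray(d, dtype=float)
    n = d.shape[0]
    r = d.sum(axis=1)
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, np.inf)  # a taxon is never joined with itself
    return q

def best_pair(d):
    q = q_matrix(d)
    i, j = np.unravel_index(np.argmin(q), q.shape)
    return (min(i, j), max(i, j))

# Invented additive distances on 4 taxa
d = [[0, 5, 9, 9],
     [5, 0, 10, 10],
     [9, 10, 0, 8],
     [9, 10, 8, 0]]
pair = best_pair(d)  # taxa 0 and 1 attain the minimum Q value of -38
```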
Relative Efficiency of Maximum Partial Likelihood Estimators Under Sampling Schemes
Nils Nelson, Utah State University, Logan, UT, mentored by Dr. Haimeng Zhang
Abstract: Cox's regression model is widely used in epidemiology and medical research to assess the influence of exposure variables and other covariates on mortality or morbidity. Such study and analysis often requires collecting a large cohort of subjects over a long period of time. Sampling schemes, which only process the raw covariate data on a small portion of sampled subjects, not only offer substantial savings but ultimately become the only practical alternative. In this project, we compare the performance of the so-called maximum partial likelihood estimator from two popular sampling schemes, the case-cohort sampling design and the nested case-control sampling design, along with that from the full cohort, in finite samples through extensive numerical simulations. This comparison is then applied to the analysis of a real data set. Finally, we investigate the relative efficiency of the nested case-cohort sampling design under highly stratified models.
A New Algorithm for Maximum Flow Distribution Network: Modified Push Algorithm
Allan Pangburn, University of North Carolina Wilmington, mentored by Dr. John Karlof
Abstract: Over the years, researchers and programmers have created and revised algorithms to solve maximum flow network (MFN) problems. These problems contain source (S) nodes, transshipment (O) nodes, sink (T) nodes, and arcs with limited capacities. The objective is to send the maximum amount of flow from the S-nodes, through the O-nodes, to the T-nodes. A variation of MFN is the maximum flow distribution network (MFDN) problem. These problems contain the same nodes as MFN, with additional nodes called distribution nodes, which have only one flow entering and multiple flows leaving, where the leaving flows are proportional to the entering flow. In this presentation, we present a new method to determine an initial feasible flow by revising Goldberg and Tarjan's algorithm (1988) and Sheu, Ting, and Wang's algorithm (2006). Major revisions include defining a pre-determined search order, resetting capacities on arcs, and two formulas to lessen the amount of excess flow. We also determine the maximum flow by incorporating some of the ideas in Sheu, Ting, and Wang's algorithm.
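For context, here is the classical baseline that push-style algorithms improve upon: Edmonds-Karp, which repeatedly augments along shortest paths in the residual graph. This is a standard textbook routine on an invented toy network, not the modified push algorithm of the talk.

```python
from collections import deque

def max_flow(capacity, s, t):
    # capacity: dict of dicts, capacity[u][v] = arc capacity
    # Build residual graph, ensuring reverse arcs exist with capacity 0.
    residual = {}
    for u in capacity:
        residual.setdefault(u, {})
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total  # no augmenting path left: flow is maximum
        # Recover the path, push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck

# Invented example: one source 's', transshipment nodes 'a', 'b', sink 't'
caps = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
```

On this network the maximum flow is 5, attained by routing 2 units along s→a→t, 2 along s→b→t, and 1 along s→a→b→t.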
Effect of Tubular Inhomogeneities on Feedback-Mediated Dynamics of a Compliant Thick Ascending Limb
Hwayeon Ryu, Duke University, Durham, NC, mentored by Dr. Anita T. Layton
Abstract: The tubuloglomerular feedback (TGF) system, a negative feedback loop in the kidney, is known as a key controller of glomerular filtration rate that mediates oscillations in tubular fluid pressure and flow, and in NaCl concentration in the tubular fluid of the thick ascending limb (TAL). In this study, we used a mathematical model of the rat thick ascending limb of the loop of Henle to study the effects of a spatially inhomogeneous TAL NaCl active transport rate, a spatially inhomogeneous tubular radius, and compliance of the tubular walls on TGF-mediated dynamics. A bifurcation analysis of the TGF model equations was performed by finding roots of the characteristic equation, and numerical simulations of the full model equations were conducted to assist in the interpretation of the bifurcation analysis and to validate its results. Model results suggest that a higher TAL NaCl active transport rate, or a smaller TAL radius near the loop-bend, gives rise to stable oscillatory solutions even with zero TGF delay. In addition, introducing compliance to the TAL walls yields results consistent with our previous studies of a compliant TAL model: TAL compliance increases the tendency of the model TGF system to oscillate.
Dimension Reduction, Laplacians, and Cheeger Numbers
John J. Steenbergen, Duke University, Durham, NC, mentored by Dr. Sayan Mukherjee
Abstract: The graph Laplacian is used by the Laplacian Eigenmaps algorithm to perform dimension reduction on data. Many other dimension reduction methods (Locally Linear Embedding (LLE), Hessian LLE, and diffusion maps) bear some relation to the Laplacian as well. What is it that makes the graph Laplacian so useful? This question has in the past been answered by relating the graph Laplacian to the Cheeger number and to the Laplace-Beltrami operator on manifolds (which itself relates to the continuous Cheeger number on manifolds). Given that graphs are special cases of simplicial complexes and that the graph Laplacian is just one of the many combinatorial Laplacians (one for each dimension), we are led to the following question. Can this framework of Laplacians and Cheeger numbers, and Laplacian-based dimension reduction methods themselves, be 'scaled' to higher dimensions? We introduce some recent research in this direction and roughly discuss what it might mean to use higher-order Laplacians to perform dimension reduction.
3D Computational Models of Flagella With and Without Hispid Hairs
Olga Stulov, State University of New York at New Paltz, NY, mentored by Dr. Xingzhou Yang
Abstract: A flagellum is a hair-like organelle projecting from the cell body of many microorganisms, such as the bacterium E. coli, the green alga Chlamydomonas, choanoflagellates, etc. These microorganisms swim using helical, wave-like flagellar propulsion. Biologists have found that some flagella have hispid hairs, also called mastigonemes, and some do not. We build mechanical and numerical models to validate some interesting biological phenomena due to the structural differences of these flagella. In our models, the flagella and mastigonemes are treated as elastic structures immersed in an incompressible viscous fluid. Since the Reynolds number is nearly zero, this research is useful in micro-robot design for human-related medical applications. Numerical simulations of 3D flagella with and without hispid hairs are presented.
Applying Algebraic Concepts to Translate Agent Based Models for Efficient Analysis
Alice Toms, North Carolina State University, Raleigh, NC, mentored by Dr. Franziska Hinkelmann
joint work with Matt Oremland, Hussein Al-Asadi, Atsya Kumano, Lauren Ohm, Reinhard Laubenbacher
Abstract: Biological systems are generally complex and have been modeled using discrete, agent-based models. These models suffer efficiency loss due to time-consuming simulation runs. By translating a model into a polynomial dynamical system (PDS), a framework is created for faster analysis. Using concepts from abstract algebra, polynomials are created to represent agent interactions, and the complexity of the model is reduced while preserving key system dynamics. The feasibility of this method is demonstrated by translating an agent-based model of the human innate immune response system into a PDS.
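To illustrate the translation idea on the smallest possible scale: Boolean update rules become polynomials over the field F_2 via AND → xy, OR → x + y + xy, NOT → 1 + x (all mod 2). The three-node network below is an invented toy, not the immune-response model of the talk.

```python
from itertools import product

# Polynomial update functions over F_2 for a toy three-node Boolean network.
def f1(x1, x2, x3):  # x1' = x2 AND x3
    return (x2 * x3) % 2

def f2(x1, x2, x3):  # x2' = x1 OR x3
    return (x1 + x3 + x1 * x3) % 2

def f3(x1, x2, x3):  # x3' = NOT x1
    return (1 + x1) % 2

def step(state):
    return (f1(*state), f2(*state), f3(*state))

# Sanity check: the polynomial updates agree with the Boolean rules on
# every one of the 2^3 states.
for s in product((0, 1), repeat=3):
    boolean = (s[1] and s[2], s[0] or s[2], int(not s[0]))
    assert step(s) == tuple(int(v) for v in boolean)
```

Once the rules are polynomials, the full state space (and hence fixed points and limit cycles) can be analyzed algebraically instead of by repeated simulation.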
An Optional Unrelated-Question Randomized Response Model
Anna Tuck, UNCG, coauthored by Jennifer Figueroa, UNCG
mentored by Dr. Sat Gupta and Dr. May Crowe
Abstract: Randomized response models were introduced by Warner (1965) to circumvent respondent bias in the face of sensitive questions. Since the original Warner model was introduced 45 years ago, many variations of this model have been introduced. One such model that has been widely used in the literature is the “Unrelated Question Model” of Greenberg et al. (1969), which gives the respondent an option to randomly select one of two questions - the actual sensitive question or an unrelated non-sensitive question. The researcher does not know which question was answered by the respondent. This model is known to perform better than the Warner model.
The main focus of the current research is to improve the original unrelated question model by allowing the respondent to answer truthfully if the question is considered non-sensitive, and answer the question using the unrelated question model if the question is deemed sensitive. Such models, known as Optional RRT models, were introduced by Gupta (2001) and Gupta et al. (2002) and were shown to be more effective than the corresponding non-optional models. We provide the theoretical framework for two different optional unrelated question RRT models, examine their mathematical properties, compare these models with the original unrelated question model, and validate the models using computer simulations.
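As a concrete illustration of the kind of simulation used for validation, here is a minimal sketch of the non-optional Greenberg unrelated-question model: with probability p the respondent answers the sensitive question, otherwise an unrelated question with known prevalence. All numeric values (n, p, both prevalences) are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

n, p = 200_000, 0.7
pi_true, pi_u = 0.30, 0.50  # true sensitive prevalence; known unrelated prevalence

# Each respondent privately picks which question to answer.
picks_sensitive = rng.random(n) < p
answer_sensitive = rng.random(n) < pi_true
answer_unrelated = rng.random(n) < pi_u
yes = np.where(picks_sensitive, answer_sensitive, answer_unrelated)

# E[yes] = p*pi + (1-p)*pi_u, so invert for a method-of-moments estimator:
lam_hat = yes.mean()
pi_hat = (lam_hat - (1 - p) * pi_u) / p
```

With this sample size the estimate lands close to the true prevalence of 0.30, even though no individual answer reveals which question was answered.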
Efficient Community Identification in Complex Networks
Mahadevan Vasudevan, University of Central Florida, Orlando, FL, mentored by Dr. Narsingh Deo
Abstract: Complex networks are large, dynamic, random graphs modeled to replicate interactions among entities in real-world complex systems (e.g., the Internet, the World Wide Web, online social networks – Facebook, Twitter, etc., and the human connectome). These networks differ from the classical Erdős–Rényi random graphs in terms of network properties such as degree distribution, average distance, and clustering. Existence of communities is one such property inherent to complex networks. A community may be defined informally as a locally-dense subgraph, of a significant size, in a large, globally-sparse graph. Such communities are of interest in various disciplines – including graph theory, physics, statistics, sociology, biology, and linguistics. At least two different questions may be posed on the community structure in large networks: (i) Given a network, detect or extract all (i.e., sets of nodes that constitute) communities; and (ii) Given a node in the network, identify the best community that the given node belongs to, if one exists. Several algorithms have been proposed to solve the former problem, known as Community Discovery. The latter problem, known as Community Identification, has also been studied, but to a much smaller extent. Both these problems have been shown to be NP-complete, and a number of approximate algorithms have been proposed in recent years. In this paper, we discuss the various community definitions in the literature and analyze the algorithms for identifying communities. We propose an alternative definition of a community based on the average degree of the induced subgraph. Also, we propose a novel algorithm to identify communities in complex networks based on maximizing the average degree.
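The proposed community score is easy to state: the average degree of the subgraph induced by a vertex set S is 2·|edges inside S| / |S|. A minimal sketch on an invented toy graph (the talk's algorithm for maximizing this score is not reproduced here):

```python
def induced_average_degree(edges, S):
    # Average degree of the subgraph induced by vertex set S:
    # 2 * (number of edges with both endpoints in S) / |S|
    S = set(S)
    inside = [e for e in edges if e[0] in S and e[1] in S]
    return 2 * len(inside) / len(S)

# Toy graph: a triangle {0, 1, 2} with a pendant vertex 3 hanging off vertex 2
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

score_triangle = induced_average_degree(edges, {0, 1, 2})  # dense candidate
score_pair = induced_average_degree(edges, {2, 3})         # sparse candidate
```

The dense triangle scores 2.0 while the sparse pendant pair scores 1.0, so maximizing this quantity favors locally dense vertex sets.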
A Fast Computational Approach to Implement Flexible Random Effects Densities for Generalized Linear and Nonlinear Mixed Models
David M. Vock, NC State University, Raleigh, NC, mentored by Dr. Marie Davidian and Dr. Anastasios (Butch) Tsiatis
Abstract: Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the vector of random effects is Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because it is difficult to test whether the random effects are Gaussian, previous research has recommended using a flexible random effects density. Approximating the likelihood of these models, to obtain maximum likelihood estimates, requires one to approximate the integral of the likelihood conditioned on the random effects with respect to the flexible random effects density. However, standard off-the-shelf integral approximations can be computationally taxing when complex random effects densities are used for GLMMs and NLMMs. To approximate the above integral, we propose to use non-adaptive Gaussian quadrature with the quadrature points centered and scaled using the empirical Bayes estimates derived from assuming Gaussian random effects. We show this numerical integration approach better approximates the likelihood and requires fewer quadrature points than traditional non-adaptive Gaussian quadrature, which centers the quadrature points at zero and scales them based on the estimated variance of the random effects. The proposed approach does not lead to poorer results compared to adaptive Gaussian quadrature but avoids computing the empirical Bayes estimates for each iteration of the maximization, which requires a separate (complex) maximization for each individual in the data set. We illustrate the method using a publicly available data set for toenail infection and discuss how it can be implemented using commercially available software.
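The centering-and-scaling idea can be demonstrated with a tiny Gauss-Hermite example: integrating a function concentrated away from zero is nearly exact when the nodes are shifted to its mode, and much worse when they stay at zero. The integrand below is an invented stand-in for a conditional likelihood contribution, not the toenail-data likelihood.

```python
import numpy as np

def gauss_hermite_shifted(f, mu, sigma, n):
    # Approximate int f(b) db using n-point Gauss-Hermite quadrature with
    # nodes centered at mu and scaled by sigma: substitute b = mu + sqrt(2)*sigma*x,
    # so int f(b) db = sqrt(2)*sigma * sum_i w_i * exp(x_i^2) * f(t_i).
    x, w = np.polynomial.hermite.hermgauss(n)
    t = mu + np.sqrt(2.0) * sigma * x
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(x**2) * f(t))

# A unit-mass integrand whose mode sits at 3, far from zero
f = lambda b: np.exp(-0.5 * (b - 3.0) ** 2) / np.sqrt(2.0 * np.pi)

centered = gauss_hermite_shifted(f, mu=3.0, sigma=1.0, n=10)  # well placed
naive = gauss_hermite_shifted(f, mu=0.0, sigma=1.0, n=10)     # poorly placed
```

With only 10 nodes, the well-placed rule recovers the true integral (which is 1) essentially exactly, while the zero-centered rule carries a visibly larger error.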
Arc length - A New Approach to Measure the Risk
Tharanga Wickramarachchi, Clemson University, Clemson, SC, mentored by Dr. Colin Gallagher and Dr. Robert Lund
Abstract: The need for more trustworthy methods of measuring the risk (volatility) of financial assets has become apparent with the global market downturn. Nowadays investors are more vigilant when investing in markets, so it is all the more necessary to identify companies or sectors in which to invest so that risk is minimized. In this project we propose the arc length as a tool for quantifying the risk of a financial time series. As the main result, we prove a functional central limit theorem for the sample arc length of a multivariate time series under finite second moment conditions. These second moment conditions play a significant role, since empirical evidence suggests that most asset returns have finite second moments but infinite fourth moments. We show that the limit theory holds for the most popular models of log returns, such as the ARMA, GARCH, and stochastic volatility model families. As an application, changepoints in the volatility of the Dow Jones Index are investigated using the CUSUM statistic based on the sample arc lengths. Simulation studies show that the arc length outperforms squared returns, for which the functional central limit theorem holds only under finite fourth moment conditions, and performs much like absolute returns. Comparison of time series in terms of volatility is also carried out as another application. In the end, the results tell us that the arc length is useful in measuring the risk of financial time series.
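The sample arc length itself is elementary to compute: treating observations as unit-spaced, it is the sum of segment lengths sqrt(1 + (x_{t+1} - x_t)^2), so volatile series trace longer paths. A minimal sketch on two invented toy series:

```python
import math

def arc_length(x):
    # Sample arc length of a series with unit time spacing:
    # sum over t of sqrt(1 + (x_{t+1} - x_t)^2)
    return sum(math.sqrt(1.0 + (b - a) ** 2) for a, b in zip(x, x[1:]))

calm = [0.0, 0.1, 0.2, 0.1, 0.0]       # low volatility: arc length near n - 1
volatile = [0.0, 1.0, -1.0, 1.0, 0.0]  # high volatility: much longer path
```

For a nearly flat series the arc length stays close to the number of steps, while large swings inflate it, which is what makes it a natural volatility proxy.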
On Cartesian products of graphs and the Roman domination function
Tony Yaacoub, Clayton State University, Morrow, GA, mentored by Dr. Elliot J. Krop
Abstract: For any graph G, a Roman domination function of G is a function f that maps the vertices of G to the set {0,1,2} such that every vertex labeled 0 has a neighbor labeled 2. The Roman domination number of G, RDF(G), is the minimum sum of labels over all Roman domination functions of G. We apply the method of S. Suen and J. Tarr from their work on Vizing's conjecture, as well as that of Y. Wu, to show an inequality for the Roman domination number of the Cartesian product of two graphs in terms of the Roman domination numbers and domination numbers of the two graphs.
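The definition is easy to operationalize by brute force on small graphs, which is a useful sanity check when working with such inequalities. A minimal sketch (exhaustive search, so only feasible for very small graphs; the two example graphs are standard illustrations, not from the talk):

```python
from itertools import product

def roman_domination_number(adj):
    # Minimum total weight over all labelings f: V -> {0, 1, 2} such that
    # every vertex labeled 0 has at least one neighbor labeled 2.
    vertices = sorted(adj)
    best = None
    for labels in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, labels))
        if all(f[v] > 0 or any(f[u] == 2 for u in adj[v]) for v in vertices):
            w = sum(labels)
            best = w if best is None else min(best, w)
    return best

path3 = {0: [1], 1: [0, 2], 2: [1]}                    # path on 3 vertices
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # cycle on 4 vertices
```

For the 3-vertex path, labeling the center 2 dominates both ends, giving Roman domination number 2; for the 4-cycle, one vertex labeled 2 covers its two neighbors and the opposite vertex needs a 1, giving 3.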