Dissertations and Theses

  1. Kalin R. Kiesling, "Weight Window Isosurface Geometries for Monte Carlo Radiation Transport Variance Reduction", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (January 2022)
    Because Monte Carlo (MC) simulation is a stochastic method whose results carry statistical uncertainty, variance reduction (VR) techniques are often necessary to reduce the relative error for quantities of interest. The use of weight windows (WWs) is a common VR method in which the statistical weights of particles are changed based on various parameters in the simulation. WWs are most commonly represented as a Cartesian WW mesh (CWWM), where WWs are defined across all energies on each mesh voxel. For large, geometrically complex problems, these meshes often need to be developed with fine resolution over the entire spatial domain in order to capture necessary fine detail in some regions of the geometry. This can cause the memory footprint of these meshes to be extremely large and computationally prohibitive. Furthermore, CWWMs are not necessarily efficient in their implementation with respect to when particle weight is checked and updated. This dissertation presents a novel method for representing WWs aimed at addressing the computational limitations of CWWMs while also improving VR efficiency. In this method, the WWs are transformed into a faceted mesh geometry, known as a WW isosurface geometry (WWIG), whose surfaces are the isosurfaces derived from the WW values in a CWWM. The WWIGs can then be used during particle tracking with the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit, which allows for particle tracking on arbitrarily complex geometries. In this work, an algorithm for using WWIGs for MC VR has been implemented in DAGMC coupled with the Monte Carlo N-Particle (MCNP) transport code, DAG-MCNP 6.2. Initial verification and demonstration experiments show that the WWIG method provides VR that is accurate and comparable to using CWWMs. Further analysis demonstrates how changing the mesh geometric features of the WWIGs affects computational performance during MC radiation transport. Depending on the parameters set for generating the WWIGs and the starting CWWM, the isosurfaces of the WWIGs can vary in mesh coarseness, surface roughness, and spacing. This work explores how these different geometric features of the WWIGs affect the memory footprint and computational performance during variance reduction for Monte Carlo radiation transport. In the end, we see that using WWIGs for MC VR improves WW efficiency and is comparable in performance to using CWWMs.
    @phdthesis{kiesling_weight_2022,
    	address = {Madison, WI},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Weight {Window} {Isosurface} {Geometries} for {Monte} {Carlo} {Radiation} {Transport} {Variance} {Reduction}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:HHJQGAKFHSOFW9B/datastreams/REF/content},
    	abstract = {In order to perform accurate Monte Carlo (MC) simulations, which is a stochastic method resulting in uncertainty, variance reduction (VR) techniques are often necessary to reduce the relative error for quantities of interest. The use of weight windows (WWs) is a common VR method in which the statistical weight of particles are changed based on various parameters in the simulation. WWs are most commonly represented as a Cartesian WW mesh (CWWM) where WWs are defined across all energies on each mesh voxel. For large, geometrically complex problems, these meshes often need to be developed with fine resolution over the entire spatial domain in order to capture necessary fine detail in some regions of the geometry. This can cause the memory footprint of these meshes to be extremely large and computationally prohibitive. Furthermore, CWWMs are not necessarily efficient in their implementation with respect to when particle weight is checked and updated. 
    
    This dissertation work presents a novel method for representing WWs aimed at addressing the computational limitations of CWWMs while also improving VR efficiency. In this method, the WWs are transformed into a faceted mesh geometry, known as a WW isosurface geometry (WWIG), where the surfaces are the isosurfaces derived from the WW values in a CWWM. The WWIGs can then be used during particle tracking with the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit, which allows for particle tracking on arbitrarily complex geometries.
    
    In this work, an algorithm for using WWIGs for MC VR has been implemented in DAGMC coupled with Monte Carlo N-Particle transport code (MCNP) (DAG-MCNP) 6.2. Initial verification and demonstration experiments show that the WWIG method performs accurate and comparable VR to using CWWMs. Further analysis has been done to demonstrate how changing mesh geometric features of the WWIGs affects computational performance during MC radiation transport. Depending on parameters set for generating the WWIGs and the starting CWWM, the isosurfaces of the WWIGs can vary in mesh coarseness, surface roughness, and spacing. In this work, we explore how these different geometric features of the WWIGs affect the memory footprint and computational performance during variance reduction for Monte Carlo radiation transport. In the end, we see that using WWIGs for MC VR improves WW efficiency and is comparable in performance to using CWWMs},
    	school = {University of Wisconsin-Madison},
    	author = {Kiesling, Kalin R.},
    	month = jan,
    	year = {2022},
    }
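
    A minimal sketch of the weight-window check described in this abstract, applied when a particle's weight is compared against the window bounds (the function name, splitting cap, and survival weight are hypothetical; this is not the DAG-MCNP 6.2 implementation):

        import random

        def apply_weight_window(weight, w_lower, w_upper, w_survive):
            """Split, roulette, or pass a particle based on its statistical weight.

            Returns a list of (n_copies, new_weight) tuples; an empty list means
            the particle was killed by Russian roulette.
            """
            if weight > w_upper:                               # too heavy: split
                n_split = min(int(weight / w_upper) + 1, 10)   # cap the splitting
                return [(n_split, weight / n_split)]
            if weight < w_lower:                               # too light: roulette
                if random.random() < weight / w_survive:
                    return [(1, w_survive)]
                return []
            return [(1, weight)]                               # inside the window

        # Example: a weight-4 particle in a window [0.5, 2.0] is split into 3 copies.
        print(apply_weight_window(4.0, 0.5, 2.0, w_survive=1.0))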
    
  2. Philip Britt, "Angular Importance Sampling for Forward and Adjoint Monte Carlo Radiation Transport", PhD, University of Wisconsin-Madison, (December 18, 2021)
    Variance reduction is an important tool to increase the rate of convergence in certain configurations of Monte Carlo problems. Methods such as CADIS are particularly useful for achieving this increased rate of convergence. However, CADIS does not include information about the direction phase space, and an equivalent method has not been used for the adjoint Monte Carlo method. In this work, the benefits of including direction information in a weight window and in a weight target (a new type of importance sampling technique presented here) are analyzed and explored, along with a way to apply importance sampling theory to the adjoint Monte Carlo method.
    @phdthesis{britt_angular_2021,
    	address = {Madison, WI},
    	type = {{PhD}},
    	title = {Angular {Importance} {Sampling} for {Forward} and {Adjoint} {Monte} {Carlo} {Radiation} {Transport}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:TAHOHYAYQKRUA84/datastreams/REF/content},
    	abstract = {Variance reduction is an important tool to increase the rate of convergence in certain configurations of Monte Carlo problems. Methods such as CADIS are particularly useful to achieve this increased rate of convergence. However, CADIS does not include information for direction phase space, and an equivalent method has not been used for the adjoint Monte Carlo method. In this work, the benefits of including direction information in a weight window and weight target (a new type of importance sampling technique presented here) are analyzed and explored, along with a way to use importance sampling theory on the adjoint Monte Carlo method},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Britt, Philip},
    	month = dec,
    	year = {2021},
    }
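
    The thesis extends importance sampling into the direction phase space; as background, a minimal sketch of the standard space/energy CADIS relations, in which weight-window targets are set inversely proportional to the adjoint (importance) flux and the source is biased consistently (illustrative only, not the angular method developed here):

        import numpy as np

        def cadis_parameters(adjoint_flux, source):
            """Standard CADIS: R = sum(adjoint * source); target weight w = R / adjoint;
            consistently biased source pdf q_hat = adjoint * source / R."""
            adjoint_flux = np.asarray(adjoint_flux, dtype=float)
            source = np.asarray(source, dtype=float)
            response = float(np.sum(adjoint_flux * source))
            target_weights = response / adjoint_flux
            biased_source = adjoint_flux * source / response
            return target_weights, biased_source

        # Example on a 4-cell toy problem (values invented):
        w, q_hat = cadis_parameters([1e-3, 1e-2, 1e-1, 1.0], [0.7, 0.2, 0.1, 0.0])
        print(w)      # particles deep in the shield are assigned small target weights
        print(q_hat)  # source sampling is pushed toward the important cells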
    
  3. Arrielle C. Opotowsky, "Spent Nuclear Fuel Attribution Using Statistical Methods: Impacts of Information Reduction on Prediction Performance", PhD, University of Wisconsin-Madison, (2021)
    Nuclear forensics is a nuclear security capability that is broadly defined as material attribution in the event of a nuclear incident. Improvement and research are needed for the technical components of this process. One such area is the provenance of non-detonated special nuclear material; studied here is spent nuclear fuel (SNF), which is applicable in a scenario involving the unlawful use of commercial byproducts from nuclear power reactors. The experimental process involves measuring known forensics signatures to ascertain the reactor parameters that produced the material, assisting in locating its source. This work proposes the use of statistical methods to determine these quantities instead of empirical relationships. The purpose of this work is to probe the feasibility of this method with a focus on field-deployable detection. Thus, two experiments are conducted, the first informing the second by providing a baseline of performance. Both experiments use simulated nuclide measurements as observations and reactor operation parameters as the prediction goals. First, machine learning algorithms are employed with full-knowledge training data, i.e., nuclide vectors from simulations that mimic lab-based mass spectrometry. The error in the mass measurements is artificially increased to probe the prediction performance with respect to information reduction. Second, this machine learning workflow is performed on training data analogous to a field-deployed gamma detector that can only measure radionuclides. The detector configuration is varied so that the information reduction now represents decreasing detector energy resolution. The results are evaluated using the error of the reactor parameter predictions. The reactor parameters of interest are the reactor type and three quantities that can attribute SNF: burnup, initial 235U enrichment, and time since irradiation. The algorithms used to predict these quantities are k-nearest neighbors, decision trees, and maximum log-likelihood calculations. The first experiment predicts all of these quantities well using the three algorithms, except for k-nearest neighbors predicting time since irradiation. For the second experiment, most of the detector configurations predict burnup well, none of them predict enrichment well, and the time since irradiation results perform on or near the baseline. This approach is an exploratory study; the results are promising and warrant further study.
    @phdthesis{opotowsky_spent_2021,
    	address = {United States -- Wisconsin},
    	type = {{PhD}},
    	title = {Spent {Nuclear} {Fuel} {Attribution} {Using} {Statistical} {Methods}: {Impacts} of {Information} {Reduction} on {Prediction} {Performance}},
    	copyright = {Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.},
    	shorttitle = {Spent {Nuclear} {Fuel} {Attribution} {Using} {Statistical} {Methods}},
    	url = {http://www.proquest.com/pqdtglobal/docview/2572584487/abstract/63EF45F6C2D47A2PQ/1},
    	abstract = {Nuclear forensics is a nuclear security capability that is broadly defined as material attribution in the event of a nuclear incident. Improvement and research is needed for technical components of this process. One such area is the provenance of non-detonated special nuclear material; studied here is spent nuclear fuel (SNF), which is applicable in a scenario involving the unlawful use of commercial byproducts from nuclear power reactors. The experimental process involves measuring known forensics signatures to ascertain the reactor parameters that produced the material, assisting in locating its source. This work proposes the use of statistical methods to determine these quantities instead of empirical relationships.
    The purpose of this work is to probe the feasibility of this method with a focus on field-deployable detection. Thus, two experiments are conducted, the first informing the second by providing a baseline of performance. Both experiments use simulated nuclide measurements as observations and reactor operation parameters as the prediction goals. First, machine learning algorithms are employed with full-knowledge training data, i.e., nuclide vectors from simulations that mimic lab-based mass spectrometry. The error in the mass measurements is artificially increased to probe the prediction performance with respect to information reduction. Second, this machine learning workflow is performed on training data analogous to a field-deployed gamma detector that can only measure radionuclides. The detector configuration is varied so that the information reduction now represents decreasing detector energy resolution. The results are evaluated using the error of the reactor parameter predictions.
    The reactor parameters of interest are the reactor type and three quantities that can attribute SNF: burnup, initial 235U enrichment, and time since irradiation. The algorithms used to predict these quantities are k-nearest neighbors, decision trees, and maximum log-likelihood calculations. The first experiment predicts all of these quantities well using the three algorithms, except for k-nearest neighbors predicting time since irradiation. For the second experiment, most of the detector configurations predict burnup well, none of them predict enrichment well, and the time since irradiation results perform on or near the baseline. This approach is an exploratory study; the results are promising and warrant further study.},
    	language = {English},
    	urldate = {2021-10-15},
    	school = {The University of Wisconsin - Madison},
    	author = {Opotowsky, Arrielle C.},
    	year = {2021},
    	note = {ISBN: 9798538113514},
    	keywords = {Machine learning, Nuclear forensics, Nuclear security, Reactor parameter prediction, Spent nuclear fuel attribution, Statistical methods},
    }
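
    A minimal sketch, with synthetic stand-in data, of the kind of nearest-neighbor regression step this abstract describes (nuclide measurements as features, burnup as the prediction target); it is not the author's workflow or training set:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsRegressor

        # Synthetic stand-in: rows are simulated spent-fuel samples, columns are
        # nuclide concentrations; the label is burnup in GWd/tHM.
        rng = np.random.default_rng(42)
        X = rng.random((500, 30))
        y = 10.0 + 50.0 * X[:, 0] + rng.normal(0.0, 2.0, size=500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        knn = KNeighborsRegressor(n_neighbors=4).fit(X_train, y_train)
        print("mean absolute burnup error:", np.abs(knn.predict(X_test) - y_test).mean())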
    
  4. Antara Khadria, "Low-Carbon Energy Solutions for the UW-Madison Campus", MS Environment and Resources, University of Wisconsin-Madison, (December 30, 2020)
    Current district heating systems rely on fossil fuels for generating steam, which is then distributed through insulated steam pipes to end customers. The UW-Madison campus has its own district heating and cooling plants that primarily supply the on-campus thermal demand. In this study, we analyze the feasibility of supplying this thermal demand with low-carbon technologies such as solar, wind, storage, and nuclear. In order to conduct this economic analysis, we use the HOMER Pro optimization tool which considers different system configurations based on the input components and chooses the system that has the least overall net-present cost. For our simulations, we consider different combinations of the low-carbon technologies and use a range of cheap, median and expensive cost inputs for each technology, resulting in an array of case scenarios. We compare these scenarios based on important metrics such as rated capacity, electric production, excess electricity, and annualized system cost. Our results show that systems deploying a combination of these technologies tend to be cheaper than those deploying individual technologies. Considering renewable technologies with storage alone results in high system costs and large amounts of excess electricity, which can be alleviated using a system that is more reliant on nuclear energy. Another key observation made in this study is that total system costs are highly sensitive to the cost inputs provided, which highlights the importance of considering up-to-date cost estimates in such analyses.
    @phdthesis{khadria_low-carbon_2020,
    	address = {Madison, WI, United States},
    	type = {{MS} {Environment} and {Resources}},
    	title = {Low-{Carbon} {Energy} {Solutions} for the {UW}-{Madison} {Campus}},
    	abstract = {Current district heating systems rely on fossil fuels for generating steam, which is then distributed through insulated steam pipes to end customers. The UW-Madison campus has its own district heating and cooling plants that primarily supply the on-campus thermal demand. In this study, we analyze the feasibility of supplying this thermal demand with low-carbon technologies such as solar, wind, storage, and nuclear. In order to conduct this economic analysis, we use the HOMER Pro optimization tool which considers different system configurations based on the input components and chooses the system that has the least overall net-present cost. For our simulations, we consider different combinations of the low-carbon technologies and use a range of cheap, median and expensive cost inputs for each technology, resulting in an array of case scenarios. We compare these scenarios based on important metrics such as rated capacity, electric production, excess electricity, and annualized system cost. Our results show that systems deploying a combination of these technologies tend to be cheaper than those deploying individual technologies. Considering renewable technologies with storage alone results in high system costs and large amounts of excess electricity, which can be alleviated using a system that is more reliant on nuclear energy. Another key observation made in this study is that total system costs are highly sensitive to the cost inputs provided, which highlights the importance of considering up-to-date cost estimates in such analyses.},
    	school = {University of Wisconsin-Madison},
    	author = {Khadria, Antara},
    	month = dec,
    	year = {2020},
    }
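
    HOMER Pro ranks candidate systems by total net present cost; a minimal sketch of that figure of merit, with invented numbers, showing why a high-capital, low-operating-cost mix can still win:

        def net_present_cost(capital, annual_cost, lifetime_yr, discount_rate):
            """Capital cost plus annual costs discounted over the project lifetime."""
            crf = (discount_rate * (1 + discount_rate) ** lifetime_yr
                   / ((1 + discount_rate) ** lifetime_yr - 1))   # capital recovery factor
            return capital + annual_cost / crf                   # annual cost / CRF = present worth

        # Invented scenarios ($M): renewables + storage vs. a nuclear-heavy mix.
        print(net_present_cost(capital=300.0, annual_cost=25.0, lifetime_yr=25, discount_rate=0.06))
        print(net_present_cost(capital=450.0, annual_cost=10.0, lifetime_yr=25, discount_rate=0.06))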
    
  5. Ryan M. Dailey, "Modeling Methods for Low Carbon Power at Federal Installations Using Nuclear Microreactors", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2020)
    With current electrical grid infrastructure aging within the United States, potential supplemental and replacement technologies are under investigation. One such potential supplement to the U.S. electrical grid is the development of microgrid infrastructure, which allows for more localized control of electricity generation and distribution. These microgrids could also allow for two-way sale of electricity between the microgrid’s assets and the grid-scale utility’s assets. Typically, microgrids will have a mix of renewable energy resources like wind or solar, energy storage solutions such as batteries, and fossil fuel generators as either emergency generators or baseload power suppliers. One study, titled: “Analysis of the Case for Federal Support of Micro-Scale Nuclear Reactors to Provide Secure Power at U.S. Government Installations”, sought to investigate whether nuclear microreactors were suitable for usage in microgrids. Nuclear microreactor designs, themselves an emerging technology, are composed of significantly smaller versions of next-generation nuclear reactor concepts. Microreactors are typically defined as devices that have an electrical power level between 1 and 10 MWe. These designs aim to be built in a factory and shipped to the intended site, rather than constructed onsite. Through the usage of advanced fuel forms and power conversion cycles, there is a goal of higher thermal efficiencies and longer fuel cycles. Due to the smaller intended sizes for microreactors and their goal of increased safety, they are likely to be able to be sited closer to the general population. With the potential for microreactors to succeed in a microgrid setting, it became necessary to build a modeling method for microreactors in a microgrid. To develop a model for a microreactor in a microgrid, a need for a microgrid modeling tool was identified. HOMER Pro was the final choice for this modeling tool, as it was sufficiently flexible for modeling all major microgrid components and had the capacity for the creation of a customized generator. Once HOMER was identified as the modeling tool, two potential microreactor modeling methods were identified. The first modeling method identified, named the Continuous Feed Model, operated as a fossil fuel generator with microreactor performance parameters. A second model was created, called the Capital Cost Method, which acts as an incremental cost method that matches realistic core refueling cycles. Once the two microreactor modeling methods were completed, two analyses were performed with modeling methods and down-scaled electrical grid data from the University of Wisconsin-Madison campus. The first analysis compared the performance of these technologies on the UW-Madison campus if it were supported by only microreactors or natural gas generators. At the best cost and performance parameters, both microreactor models were competitive against natural gas. As the assumed costs grew, microreactors quickly fell off in terms of competitiveness. The second analysis, dubbed the “Green-case”, investigated if a microreactor could make a cheaper microgrid solution than a purely renewable energy (solar and battery storage) case. In this Green-case analysis, it was found that both microreactor models predicted a noteworthy reduction in the cost of a microgrid solution if a microreactor was included, at a variety of performance parameters.
    @phdthesis{dailey_modeling_2020,
    	address = {Madison, WI},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Modeling {Methods} for {Low} {Carbon} {Power} at {Federal} {Installations} {Using} {Nuclear} {Microreactors}},
    	abstract = {With current electrical grid infrastructure aging within the United States, potential supplemental and replacement technologies are under investigation. One such potential supplement to the U.S. electrical grid is the development of microgrid infrastructure, which allows for more localized control of electricity generation and distribution. These microgrids could also allow for two-way sale of electricity between the microgrid’s assets and the grid-scale utility’s assets. Typically, microgrids will have a mix of renewable energy resources like wind or solar, energy storage solutions such as batteries, and fossil fuel generators as either emergency generators or baseload power suppliers. One study, titled: “Analysis of the Case for Federal Support of Micro-Scale Nuclear Reactors to Provide Secure Power at U.S. Government Installations”, sought to investigate whether nuclear microreactors were suitable for usage in microgrids.
    
    Nuclear microreactor designs, themselves an emerging technology, are composed of significantly smaller versions of next-generation nuclear reactor concepts. Microreactors are typically defined as devices that have an electrical power level between 1 and 10 MWe. These designs aim to be built in a factory and shipped to the intended site, rather than constructed onsite. Through the usage of advanced fuel forms and power conversion cycles, there is a goal of higher thermal efficiencies and longer fuel cycles. Due to the smaller intended sizes for microreactors and their goal of increased safety, they are likely to be able to be sited closer to the general population. With the potential for microreactors to succeed in a microgrid setting, it became necessary to build a modeling method for
    microreactors in a microgrid.
    
    To develop a model for a microreactor in a microgrid, a need for a microgrid modeling tool was identified. HOMER Pro was the final choice for this modeling tool, as it was sufficiently flexible for modeling all major microgrid components and had the capacity for the creation of a customized generator. Once HOMER was identified as the modeling tool, two potential microreactor modeling methods were identified. The first modeling method identified, named the Continuous Feed Model, operated as a fossil fuel generator with microreactor performance parameters. A second model was created, called the Capital Cost Method, which acts as an incremental cost method that matches realistic core refueling cycles.
    
    Once the two microreactor modeling methods were completed, two analyses were performed with modeling methods and down-scaled electrical grid data from the University of Wisconsin-Madison campus. The first analysis compared the performance of these technologies on the UW-Madison campus if it were supported by only microreactors or natural gas generators. At the best cost and performance parameters, both microreactor models were competitive against natural gas. As the assumed costs grew, microreactors quickly fell off in terms of competitiveness. The second analysis, dubbed the “Green-case”, investigated if a microreactor could make a cheaper microgrid solution than a purely renewable energy (solar and battery storage) case. In this Green-case analysis, it was found that both microreactor models predicted a noteworthy reduction in the cost of a microgrid solution if a microreactor was included, at a variety of performance parameters.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Dailey, Ryan M.},
    	year = {2020},
    }
    
  6. Chelsea D'Angelo, "Variance Reduction for Multi-physics Analysis of Moving Systems", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (May 16, 2019)
    The quantification of the shutdown dose rate (SDR) caused by photons emitted by activated structural materials is an important and necessary step of the design process of fusion energy systems (FES). FES are purposefully designed with modular components that can be moved out of a facility after shutdown for maintenance. It is particularly important to accurately quantify the SDR during maintenance procedures that may cause facility personnel to be in closer proximity to activated equipment. This type of analysis requires neutron and photon transport calculations coupled by activation analysis to determine the SDR. Due to its ability to obtain highly accurate results, the Monte Carlo (MC) method is often used for both transport operations, but the computational expense of obtaining results with low error in systems with heavy shielding can be prohibitive. However, variance reduction (VR) methods can be used to optimize the computational efficiency by artificially increasing the simulation of events that will contribute to the quantity of interest. One hybrid VR technique used to optimize the initial transport step of a multi-step process is known as the Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method. The basis of MS-CADIS is that the importance function used in each step of the problem must represent the importance of the particles to the final objective function. As the spatial configuration of the materials changes, the probability that they will contribute to the objective function also changes. In the specific case of SDR analysis, the importance function for the neutron transport step must capture the probability of materials to become activated and subsequently emit photons that will make a significant contribution to the SDR. The Groupwise Transmutation (GT)-CADIS method is an implementation of MS-CADIS that optimizes the neutron transport step of SDR calculations. GT-CADIS generates an adjoint neutron source based on certain assumptions and approximations about the transmutation network. This source is used for adjoint transport and the resulting flux is used to generate the biasing parameters to optimize the forward neutron transport. For systems that undergo movement, a new hybrid deterministic/MC VR technique, the Time-integrated (T)GT-CADIS method, that adapts GT-CADIS for dynamic systems by calculating a time-integrated adjoint neutron source was developed. This work demonstrates the tools and workflows necessary to efficiently calculate quantities of interest resulting from coupled, multi-physics processes in dynamic systems.
    @phdthesis{dangelo_variance_2019,
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Variance {Reduction} for {Multi}-physics {Analysis} of {Moving} {Systems}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:KT62IHINMM6JZ9A/datastreams/REF/content},
    	abstract = {The quantification of the shutdown dose rate (SDR) caused by photons emitted by activated structural materials is an important and necessary step of the design process of fusion energy systems (FES). FES are purposefully designed with modular components that can be moved out of a facility after shutdown for maintenance. It is particularly important to accurately quantify the SDR during maintenance procedures that may cause facility personnel to be in closer proximity to activated equipment. This type of analysis requires neutron and photon transport calculations coupled by activation analysis to determine the SDR. Due to its ability to obtain highly accurate results, the Monte Carlo (MC) method is often used for both transport operations, but the computational expense of obtaining results with low error in systems with heavy shielding can be prohibitive. However, variance reduction (VR) methods can be used to optimize the computational efficiency by artificially increasing the simulation of events that will contribute to the quantity of interest.
    
    One hybrid VR technique used to optimize the initial transport step of a multi-step process is known as the Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method. The basis of MS-CADIS is that the importance function used in each step of the problem must represent the importance of the particles to the final objective function. As the spatial configuration of the materials changes, the probability that they will contribute to the objective function also changes. In the specific case of SDR analysis, the importance function for the neutron transport step must capture the probability of materials to become activated and subsequently emit photons that will make a significant contribution to the SDR. The Groupwise Transmutation (GT)-CADIS method is an implementation of MS-CADIS that optimizes the neutron transport step of SDR calculations. GT-CADIS generates an adjoint neutron source based on certain assumptions and approximations about the transmutation network. This source is used for adjoint transport and the resulting flux is used to generate the biasing parameters to optimize the forward neutron transport.
    
    For systems that undergo movement, a new hybrid deterministic/MC VR technique, the Time-integrated (T)GT-CADIS method, that adapts GT-CADIS for dynamic systems by calculating a time-integrated adjoint neutron source was developed. This work demonstrates the tools and workflows necessary to efficiently calculate quantities of interest resulting from coupled, multi-physics processes in dynamic systems.},
    	school = {University of Wisconsin-Madison},
    	author = {D'Angelo, Chelsea},
    	month = may,
    	year = {2019},
    }
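
    A minimal, heavily hedged sketch of one way a time-integrated adjoint neutron source could be assembled from per-configuration adjoint sources, weighting each geometry configuration by the time it is occupied (the names and the duration-weighted sum are assumptions for illustration, not the published TGT-CADIS formulation):

        import numpy as np

        def time_integrated_adjoint_source(step_sources, step_durations):
            """Duration-weighted combination of per-configuration adjoint sources.

            step_sources: array of shape (n_steps, n_voxels)
            step_durations: time spent in each maintenance/movement configuration
            """
            step_sources = np.asarray(step_sources, dtype=float)
            weights = np.asarray(step_durations, dtype=float)
            weights = weights / weights.sum()
            return weights @ step_sources            # result has shape (n_voxels,)

        # Example: three positions of a moving component held for 1 h, 2 h, and 1 h.
        per_step = np.random.rand(3, 1000)
        q_adj = time_integrated_adjoint_source(per_step, [1.0, 2.0, 1.0])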
    
  7. Moataz S. Harb, "Propagation of Statistical Uncertainty in Mesh-Based Shutdown Dose Rate Calculations", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (March 15, 2019)
    In fusion energy systems (FES), high energy neutrons are emitted from the plasma source - due to the D-T fusion reaction - enabling them to penetrate deep into the materials surrounding the core. Energy is then deposited along the path of the neutrons due to interactions with nuclides, resulting in - besides nuclear heating - two main effects: radiation damage and transmutation. Radiation damage causes changes in the macroscopic properties of the materials due to microscopic changes that result from interactions of high energy neutrons with nuclides. Transmutation is caused by the absorption of neutrons by nuclides in the medium and almost always results in the production of radioactive nuclides. Such radioactive nuclides are of importance to FES design and operation as they persist after the shutdown of the facility due to their long half-lives. Efforts are directed at quantifying the shutdown dose rate (SDR) that results from gamma-emitting nuclides produced by transmutation. Monte Carlo (MC) methods are favored over deterministic methods for the simulation of particle transport in FES due to the complexity of the models and to reduce the uncertainties/errors in the predicted particle flux distributions due to approximations. The rigorous 2-step method (R2S) relies on dedicated activation calculations to predict the photon emission density distribution, and is widely used for SDR quantification. It involves a neutron transport step, activation analysis to obtain the photon emission density, and a photon transport step to calculate the SDR. It is often the case that neutrons suffer attenuation in traversing the medium from the plasma source - due to interactions with nuclides - and that results in a steep gradient in the neutron flux. Variance reduction (VR) tools have been developed with the primary goal of pushing neutrons - simulated particles - to regions of the phase space that are of importance for the quantities under consideration in order to reduce the uncertainty in the MC results. The recently developed Groupwise Transmutation Consistent Adjoint Driven Importance Sampling (GT-CADIS) method provides a capability to obtain the photon emission density distribution as a function of the energy-dependent group-wise neutron flux distribution via linearization of the transmutation operator. Using the photon emission density, it is possible to overcome previous difficulties with error propagation in the R2S workflow. One primary concern with the R2S workflow is that only the contribution of the photon transport step is considered as a measure of the uncertainty of the calculated SDR, while the contribution from the neutron transport step remains undefined. Previous methods have tried to tackle this issue, but there was always difficulty in obtaining the correlation of the neutron fluxes, which resulted in either impractical approximations or calculating only the upper and lower bounds of the uncertainty of the SDR. In this document, the R2S workflow has been investigated. First, issues related to the neutron transport step and the uncertainty of the photon emission density have been addressed. Second, a scheme was developed to propagate the statistical uncertainty of the neutron transport step to the SDR. Starting with the neutron transport step, a variation of the main R2S approach that aimed at increasing the resolution while reducing the computational expense was found to introduce systematic errors that might undermine the gain in computational cost. One of the difficulties in propagating the neutron flux uncertainty to the photon emission density is obtaining the correlation values between the neutron fluxes in different energy groups and mesh voxels. By utilizing the GT method, an approximation to the calculation of the correlation coefficients has been investigated, building on the fact that with group-wise transmutation the number of correlation terms needed is greatly reduced. It was discovered that the correlation between the neutron fluxes in different energy groups is a function of the material composition. That facilitated obtaining the needed correlation matrix and quantifying the uncertainty of the photon emission density. A method to propagate the photon source uncertainty to the SDR by random sampling was developed and was demonstrated to be efficient on various types of numerical experiments as well as a production-level problem.
    @phdthesis{harb_propagation_2019,
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Propagation of {Statistical} {Uncertainty} in {Mesh}-{Based} {Shutdown} {Dose} {Rate} {Calculations}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:6MDCBYJEASBZ78Z/datastreams/REF/content},
    	abstract = {In fusion energy systems (FES), high energy neutrons are emitted from the
    plasma source - due to the D-T fusion reaction - enabling them to penetrate deep in the materials surrounding the core. Energy is then deposited along the path of the neutrons due to interactions with nuclides, resulting in - besides nuclear heating - two main effects; radiation damage and transmutation. Radiation damage causes changes in the macroscopic properties of the materials due to microscopic changes that result from interactions of high energy neutrons with nuclides. Transmutation is caused by the absorption of neutrons by nuclides in the medium and almost always results in the production of radioactive nuclides. Such radioactive nuclides are of importance to FES design and operation as they persist after the shutdown of the facility due to their long half lives. Efforts are directed to quantify the shutdown dose rate (SDR) that results from gamma emitting nuclides produced by transmutation. Monte Carlo (MC) methods are favored over deterministic methods for the simulation of particles transport in FES due to complexity of the models and to reduce the uncertainties/errors of the predicted particle flux distributions due to approximations. The rigorous 2-step method (R2S) relies on dedicated activation calculations to predict the photon emission density distribution, and is widely used for SDR quantification. It involves a neutron transport step, activation analysis to obtain the photon emission density, and a photon transport step to calculate the SDR.
    It is often the case that neutrons suffer attenuation in traversing the medium from the plasma source - due to interactions with nuclides - and that results in a steep gradient in the neutron flux. Variance reduction (VR) tools have been developed with the primary goal of pushing neutrons - simulated particles - to regions of the phase-space that are of importance for the quantities under consideration in order to reduce the uncertainty in the MC results. The recently developed Group- wise Transmutation - Consistent Adjoint Driven Importance Sampling (GT CADIS) method provides a capability to obtain the photon emission density distribution as a function of the energy dependent group-wise neutron flux distribution via linearization of the transmutation operator. Using the photon emission density it is possible to overcome previous difficulties of the error propagation in the R2S workflow. One primary concern with the R2S workflow is that only the contribution of the photon transport step is considered as a measure of the uncertainty of the calculated SDR, while the contribution from the neutron transport step remains undefined. Previous methods have tried to tackle this issue but there was always difficulty in obtaining the correlation of the neutron fluxes and that resulted in implementing either impractical approximations or just calculating the upper and lower bounds of the uncertainty of the SDR.
    In this document, the R2S workflow has been investigated. First, issues related to the neutron transport step and the uncertainty of the photon emission density have been addressed. Second, a scheme was developed to propagate the statistical uncertainty of the neutron transport step to the SDR. Starting with the neutron transport step, a variation of the main R2S that aimed at increasing the resolution while reducing the computational expenses was found to introduce systematic errors that might undermine the gain in the computational cost. One of the difficulties in propagating the neutron flux uncertainty to the photon emission density is obtaining the correlation values between the neutron fluxes in different energy groups and mesh voxels. By utilizing the GT method, an approximation to the calculation of the correlation coefficients has been investigated building on the fact that using group-wise transmutation the correlation terms needed were greatly reduced. It was discovered that the correlation between the neutron fluxes in different energy groups is a function of the material composition. That facilitated obtaining the needed correlation matrix and quantifying the uncertainty of the photon emission density. A method to propagate the photon source uncertainty to the SDR by random sampling was developed and was demonstrated to be efficient on various types of numerical experiments as well as a production level problem.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Harb, Moataz S.},
    	month = mar,
    	year = {2019},
    	keywords = {Correlation, Error Propagation, FESS-FNSF, R2S, SDR},
    }
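
    A minimal sketch of propagating a photon-source uncertainty to a response by random sampling, in the spirit of the sampling scheme the abstract mentions (the independent, normally distributed voxel sources and the linear dose response are simplifying assumptions made only for illustration):

        import numpy as np

        rng = np.random.default_rng(7)

        # Invented per-voxel photon emission densities, their standard deviations,
        # and linear dose-response coefficients for one detector location.
        q_mean = np.array([1.0e9, 4.0e8, 2.5e8])
        q_std = np.array([5.0e7, 4.0e7, 3.0e7])
        dose_per_emission = np.array([2.0e-12, 5.0e-13, 1.0e-13])

        samples = rng.normal(q_mean, q_std, size=(10_000, q_mean.size))   # sample photon sources
        sdr = samples @ dose_per_emission                                 # propagate each sample to the SDR
        print(f"SDR = {sdr.mean():.3e} +/- {sdr.std():.3e}")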
    
  8. Alexander Swenson, "Surrogate Reactor Modeling for Space Electrical System Mass Optimization", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (January 11, 2019)
    Long-term space exploration to Mars and other parts of the solar system will require significant amounts of electrical power. Nuclear reactors have long been considered to meet these power needs. Minimizing the total system mass of electrical systems is crucial to minimize launch costs. A surrogate reactor model was developed to optimize the mass of a nuclear reactor for space applications. The model was designed to be used in conjunction with power cycle component mass models to explore the tradeoff between component masses with the goal of minimizing the total mass of the system. The reactor mass model needed to rapidly execute in order to be useful in an optimization algorithm. To this end, a surrogate mass model was developed that did not rely on high-fidelity reactor physics and finite element analysis tools. The model included a reactivity constraint to support a 10 year mission life and a thermal constraint to ensure fuel integrity. The reactor mass model was coupled to an external power cycle model through flow inputs. Coolant flow conditions at the inlet and outlet of the core define thermophysical properties and core thermal power requirements that were used by the thermal hydraulic model to estimate required volumetric power densities in the reactor core. The reactivity constraint was modeled as a beginning of life (BOL) excess reactivity target. The target was chosen using a large dataset of depletion calculations and was deemed sufficient to ensure the optimized designs had sufficient excess reactivity to sustain 10 years of full-power operation. Important neutronics and operational parameters were also tested in this dataset to determine which parameters were most effective for use in a reduced-order reactivity model. A reduced-order surrogate reactivity model was developed to constrain a minimum-mass reactor to a target BOL reactivity. The reduced-order model was created from a large dataset of criticality calculations using MCNP6.1. This model was used to constrain the reactor geometry in the thermal hydraulic model. Thousands of MCNP keff calculations were performed to generate a relationship between core radius, fuel fraction, reflector thickness, and keff. This dataset was represented with trilinear interpolation in order to develop a mass-minimized relationship between core radius and fuel fraction that met the target excess reactivity. The reactor mass model was ultimately a neutronically-constrained thermal hydraulic model. The reactivity model was used to constrain the core radius as a function of fuel fraction in the thermal hydraulic model. The thermal hydraulic model used a root-finding routine to find a fuel fraction that met the required thermal input from an external power cycle model. The model used 1D heat transfer models to determine the fuel fraction that, combined with a constrained core radius, could generate the required thermal input. The result was a mass-optimized, fully constrained reactor design that met coolability and reactivity requirements for a 10 year mission and the given power cycle inputs.
    @phdthesis{swenson_surrogate_2019,
    	address = {Department of Engineering Physics},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Surrogate {Reactor} {Modeling} for {Space} {Electrical} {System} {Mass} {Optimization}},
    	abstract = {Long-term space exploration to Mars and other parts of the solar system will
    require significant amounts of electrical power. Nuclear reactors have long been
    considered to meet these power needs. Minimizing the total system mass of
    electrical systems is crucial to minimize launch costs. A surrogate reactor
    model was developed to optimize the mass of a nuclear reactor for space
    applications. The model was designed to be used in conjunction with power cycle
    component mass models to explore the tradeoff between component masses
    with the goal of minimizing the total mass of the system. The
    reactor mass model needed to rapidly execute in order to be useful in an
    optimization algorithm. To this end, a surrogate mass model was developed that
    did not rely on high-fidelity reactor physics and finite element analysis tools.
    The model included a reactivity constraint to support a 10 year mission life and
    a thermal constraint to ensure fuel integrity.
    
    The reactor mass model was coupled to an external power cycle model through flow
    inputs. Coolant flow conditions at the inlet and outlet of the core define
    thermophysical properties and core thermal power requirements that were used by
    the thermal hydraulic model to estimate required volumetric power densities in
    the reactor core.
    
    The reactivity constraint was modeled as a beginning of life (BOL) excess reactivity
    target. The target was chosen using a large dataset of depletion calculations
    and was deemed sufficient to ensure the optimized designs had sufficient excess reactivity
    to sustain 10 years of full-power operation. Important neutronics and operational
    parameters were also tested in this dataset to determine which parameters were
    most effective for use in a reduced-order reactivity model.
    
    A reduced-order surrogate reactivity model was developed to constrain a
    minimum-mass reactor to a target BOL reactivity. The reduced-order model was
    created from a large dataset of criticality calculations using MCNP6.1. This
    model was used to constrain the reactor geometry in the thermal hydraulic model.
    Thousands of MCNP keff calculations were performed to generate a relationship
    between core radius, fuel fraction, reflector thickness, and keff. This dataset
    was represented with trilinear interpolation in order to develop a
    mass-minimized relationship between core radius and fuel fraction that met the
    target excess reactivity.
    
    The reactor mass model was ultimately a neutronically-constrained thermal
    hydraulic model. The reactivity model was used to constrain the core radius as
    a function of fuel fraction in the thermal hydraulic model. The thermal
    hydraulic model used a root-finding routine to find a fuel fraction that met
    the required thermal input from an external power cycle model. The model used 1D
    heat transfer models to determine the fuel fraction that, combined with a constrained
    core radius, could generate the required thermal input. The result was a
    mass-optimized, fully constrained reactor design that met coolability and reactivity
    requirements for a 10 year mission and the given power cycle inputs.},
    	school = {University of Wisconsin-Madison},
    	author = {Swenson, Alexander},
    	month = jan,
    	year = {2019},
    }
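
    A minimal sketch of the two numerical pieces the abstract names: trilinear interpolation of a keff dataset over (core radius, fuel fraction, reflector thickness) and a root find for the geometry that meets a target excess reactivity. The grid values below are invented stand-ins, not the thesis dataset:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator
        from scipy.optimize import brentq

        # Invented keff grid over core radius [cm], fuel fraction [-], reflector thickness [cm];
        # in the thesis these values would come from MCNP6.1 criticality calculations.
        radius = np.linspace(10.0, 40.0, 7)
        fuel_frac = np.linspace(0.3, 0.9, 7)
        reflector = np.linspace(2.0, 15.0, 5)
        R, F, T = np.meshgrid(radius, fuel_frac, reflector, indexing="ij")
        keff = 0.5 + 0.012 * R + 0.3 * F + 0.008 * T          # stand-in for tabulated keff

        keff_interp = RegularGridInterpolator((radius, fuel_frac, reflector), keff)

        def radius_for_target(fuel_fraction, refl_cm, k_target=1.03):
            """Smallest core radius on the grid that meets the BOL reactivity target."""
            f = lambda r: keff_interp([[r, fuel_fraction, refl_cm]])[0] - k_target
            return brentq(f, radius[0], radius[-1])

        print(radius_for_target(0.6, 10.0))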
    
  9. Patrick Shriwise, "Geometry Query Optimizations in CAD-Based Tessellations for Monte Carlo Radiation Transport", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (June 8, 2018)
    The performance of direct CAD-based Monte Carlo Radiation Transport (MCRT) relies heavily on its ability to return geometric queries robustly via ray tracing methods. Current applications of ray tracing for MCRT are robust given that certain requirements are met [48], but cause simulations to run much longer than native code geometry representations. This work explores alternate geometry query methods aimed at reducing the complexity of these operations as well as algorithmic optimization by adapting recent developments in CPU ray tracing for use in engineering analysis. A preconditioning scheme is presented aimed at avoiding unnecessary ray queries for volumes with high collision densities. A model is also developed to inform the application of the preconditioning data structure based on a post facto analysis. Next, a specialized ray tracing kernel for MCRT is presented. As new ray tracing kernels are developed for real-time, photo-realistic rendering, algorithmic approaches have appeared which are demonstrated to be advantageous when applied in radiation transport. In particular, the application of data parallelism in ray tracing for Monte Carlo is demonstrated - resulting in significant performance improvements. Finally, model features resulting in systematic performance degradation commonly found in CAD models for MCRT are studied. Methods are proposed and demonstrated to improve performance of ray tracing kernels in models with these features. The combination of this work is shown to provide improvement factors ranging from 1.1 to 9.54 in simulation run time without loss of robustness for several production analysis models. The final impact of this work is the alleviation of concern for additional computational time in using CAD geometries for MCRT while maintaining the benefit of reduced human time and effort in model preparation and design.
    @phdthesis{shriwise_geometry_2018,
    	address = {Madison, WI},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Geometry {Query} {Optimizations} in {CAD}-{Based} {Tessellations} for {Monte} {Carlo} {Radiation} {Transport}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:NNODVI4XHMSOV8F/datastreams/REF/content},
    	abstract = {The performance of direct CAD-based Monte Carlo Radiation Transport (MCRT) relies heavily on its ability to return geometric queries robustly via ray tracing methods. Current applications of ray tracing for MCRT are robust given that certain requirements are met [48], but cause simulations to run much longer than native code geometry representations. This work explores alternate geometry query methods aimed at reducing the complexity of these operations as well as algorithmic optimization by adapting recent developments in CPU ray tracing for use in engineering analysis. A preconditioning scheme is presented aimed at avoiding unnecessary ray queries for volumes with high collision densities. A model is also developed to inform the application of the preconditioning data structure based on a post facto analysis. Next, a specialized ray tracing kernel for MCRT is presented. As new ray tracing kernels are developed for real-time, photo-realistic rendering, algorithmic approaches have appeared which are demonstrated to be advantageous when applied in radiation transport. In particular, the application of data parallelism in ray tracing for Monte Carlo is demonstrated - resulting in significant performance improvements. Finally, model features resulting in systematic performance degradation commonly found in CAD models for MCRT are studied. Methods are proposed and demonstrated to improve performance of ray tracing kernels in models with these features. The combination of this work is shown to provide improvement factors ranging from 1.1 to 9.54 in simulation run time without loss of robustness for several production analysis models. The final impact of this work is the alleviation of concern for additional computational time in using CAD geometries for MCRT while maintaining the benefit of reduced human time and effort in model preparation and design.},
    	language = {English},
    	school = {University of Wisconsin - Madison},
    	author = {Shriwise, Patrick},
    	month = jun,
    	year = {2018},
    	keywords = {CAD, dagmc, mesh, monte carlo, performance, ray tracing},
    }
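
    As background to the ray-tracing kernels this abstract discusses, a minimal sketch of the ray/axis-aligned-bounding-box slab test used to cull nodes during BVH traversal (illustrative only; not the DAGMC or Embree implementation):

        def ray_hits_aabb(origin, direction, box_min, box_max, eps=1e-12):
            """Slab test: does the ray (origin + t*direction, t >= 0) hit the box?"""
            t_near, t_far = 0.0, float("inf")
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if abs(d) < eps:                       # ray parallel to this pair of slabs
                    if o < lo or o > hi:
                        return False
                    continue
                t0, t1 = (lo - o) / d, (hi - o) / d
                if t0 > t1:
                    t0, t1 = t1, t0
                t_near, t_far = max(t_near, t0), min(t_far, t1)
                if t_near > t_far:
                    return False
            return True

        print(ray_hits_aabb((0, 0, 0), (1, 1, 0), (2, 1, -1), (4, 5, 1)))   # True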
    
  10. Elliott Dean Biondo, "Hybrid Monte Carlo/Deterministic Neutron Transport for Shutdown Dose Rate Analysis", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (July 21, 2016)
    In fusion energy systems (FES) neutrons are born from a burning plasma and subsequently activate surrounding system components. The photon dose rate after shutdown from the resultant radionuclides must be quantified for maintenance planning. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for this purpose. This requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work one such formulation is introduced which is valid when a specific set of transmutation criteria are met, referred to as the Single Neutron Interaction and Low Burnup (SNILB) criteria. These criteria are quantitatively evaluated for typical FES scenarios and are shown to be met within a reasonable margin. Groupwise Transmutation (GT)-CADIS, proposed here, is an implementation of MS-CADIS that calculates this adjoint neutron source using a series of irradiation calculations. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward Weighted (FW)-CADIS method and 90,000 ± 50,000 relative to analog. When the SNILB criteria are egregiously violated, GT-CADIS modifications are proposed and are shown to provide significant performance improvements. Finally, GT-CADIS is applied to a production-level problem involving a Spherical Tokamak Fusion Nuclear Science Facility (ST-FNSF) device. This work shows that GT-CADIS is broadly applicable to FES scenarios and will significantly reduce the computational resources necessary for SDR analysis.
    @phdthesis{biondo_hybrid_2016,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Hybrid {Monte} {Carlo}/{Deterministic} {Neutron} {Transport} for {Shutdown} {Dose} {Rate} {Analysis}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:ITANHEGGRPM338Z/datastreams/REF/content},
    	abstract = {In fusion energy systems (FES) neutrons are born from a burning plasma and subsequently activate surrounding system components. The photon dose rate after shutdown from the resultant radionuclides must be quantified for maintenance planning. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for this purpose. This requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work one such formulation is introduced which is valid when a specific set of transmutation criteria are met, referred to as the Single Neutron Interaction and Low Burnup (SNILB) criteria. These criteria are quantitatively evaluated for typical FES scenarios and are shown to be met within a reasonable margin. Groupwise Transmutation (GT)-CADIS, proposed here, is an implementation of MS-CADIS that calculates this adjoint neutron source using a series of irradiation calculations. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward Weighted (FW)-CADIS method and 90,000 ± 50,000 relative to analog. When the SNILB criteria are egregiously violated, GT-CADIS modifications are proposed and are shown to provide significant performance improvements. Finally, GT-CADIS is applied to a production-level problem involving a Spherical Tokamak Fusion Nuclear Science Facility (ST-FNSF) device. This work shows that GT-CADIS is broadly applicable to FES scenarios and will significantly reduce the computational resources necessary for SDR analysis.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Biondo, Elliott Dean},
    	month = jul,
    	year = {2016},
    }
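
    The speedups quoted in this abstract are conventionally expressed as ratios of the Monte Carlo figure of merit, FOM = 1/(R^2 T), where R is the relative error and T the computing time; a minimal sketch of that bookkeeping with invented numbers:

        def figure_of_merit(rel_error, cpu_time_s):
            """FOM = 1 / (R^2 * T); roughly constant for a given method and tally."""
            return 1.0 / (rel_error ** 2 * cpu_time_s)

        # Invented example: same wall time, very different relative errors.
        analog = figure_of_merit(rel_error=0.45, cpu_time_s=3600.0)
        biased = figure_of_merit(rel_error=0.02, cpu_time_s=3600.0)
        print("estimated speedup:", biased / analog)   # ratio of FOMs estimates the speedup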
    
  11. Robert W. Carlsen, "Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (March 29, 2016)
    Nuclear fuel cycle analysis is a field focused on understanding and modeling the nuclear industry and ecosystem at a macroscopic level. To date, fuel cycle analysis has mostly involved hand-crafting details of fuel cycle scenarios for investigation. Many different tools have evolved over time to help address the need to investigate both the equilibrium properties of nuclear fuel cycles and the dynamics of transitions between them. There is great potential for computational resources to improve both the quality of answers and the size of questions that can be asked. Cyclus is one of the first nuclear fuel cycle simulators to strongly accommodate larger-scale analysis with its free availability, liberal open-source licensing, and first-class Linux support. Cyclus also provides features that uniquely enable investigating the effects of modeling choices and modeling fidelity within fuel cycle scenarios. This is made possible by the complementary nature of Cyclus’ dynamic resource exchange and plugin-based architecture. This work is divided into three major pieces focusing on optimization, investigating effects of modeling choices, and dealing with uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules for fuel cycle scenarios with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This method allows relationships between reactor types and power capacity constraints to be represented implicitly in the definition of the optimization variables. This not only enables optimizers without constraint support to be used, but it also prevents wasting computational resources searching through many infeasible deployment schedules. With the simplified constraint handling, optimization can be used to analyze larger problems in addition to providing better solutions generally. The developed methodology also enables the deployed power generation capacity over time and the deployment of non-reactor support facilities to be included as optimization variables. There exist many fuel cycle simulators built with many different combinations of modeling choices and assumptions. This makes comparing results from them difficult. The flexibility of Cyclus makes it a rich playground for comparing the effects of such modeling choices in a consistent way. Effects such as reactor refueling cycle synchronization, inter-facility competition, on-hand inventory requirements, and others are compared in four fuel cycle scenarios each using combinations of fleet or individually modeled reactors with 1-month or 3-month long time steps. There are noticeable differences in results from the different cases. The largest differences are seen during periods of constrained fuel availability for reactors. Research into the effects of modeling choices such as these can help improve the quality and consistency of fuel cycle analysis codes in addition to increasing confidence in the utility of fuel cycle analysis generally.
    @phdthesis{carlsen_advanced_2016,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Advanced {Nuclear} {Fuel} {Cycle} {Transitions}: {Optimization}, {Modeling} {Choices}, and {Disruptions}},
    	url = {https://search.library.wisc.edu/digital/ARXV7VRVTZ2BCW8I},
    	abstract = {Nuclear fuel cycle analysis is a field focused on understanding and modeling the nuclear industry and ecosystem at a macroscopic level. To date, fuel cycle analysis has mostly involved hand-crafting details of fuel cycle scenarios for investigation. Many different tools have evolved over time to help address the need to investigate both the equilibrium properties of nuclear fuel cycles and the dynamics of transitions between them. There is great potential for computational resources to improve both the quality of answers and the size of questions that can be asked. Cyclus is one of the first nuclear fuel cycle simulators to strongly accommodate larger-scale analysis with its free availability, liberal open-source licensing, and first-class Linux support. Cyclus also provides features that uniquely enable investigating the effects of modeling choices and modeling fidelity within fuel cycle scenarios. This is made possible by the complementary nature of Cyclus’ dynamic resource exchange and plugin based architecture. This work is divided into three major pieces focusing on optimization, investigating effects of modeling choices, and dealing with uncertainty.
    
    Effective optimization techniques are developed for automatically determining desirable facility deployment schedules for fuel cycle scenarios with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This method allows relationships between reactor types and power capacity constraints to be represented implicitly in the definition of the optimization variables. This not only enables optimizers without constraint support to be used, but it also prevents wasting computational resources searching through many infeasible deployment schedules. With the simplified constraint handling, optimization can be used to analyze larger problems in addition to providing better solutions generally. The developed methodology also enables the deployed power generation capacity over time and the deployment of non-reactor support facilities to be included as optimization variables.
    
    There exist many fuel cycle simulators built with many different combinations of modeling choices and assumptions. This makes comparing results from them difficult. The flexibility of Cyclus makes it a rich playground for comparing the effects of such modeling choices in a consistent way. Effects such as reactor refueling cycle synchronization, inter-facility competition, on-hand inventory requirements, and others are compared in four fuel cycle scenarios each using combinations of fleet or individually modeled reactors with 1-month or 3-month long time steps. There are noticeable differences in results from the different cases. The largest differences are seen during periods of constrained fuel availability for reactors. Research into the effects of modeling choices such as these can help improve the quality and consistency of fuel cycle analysis codes in addition to increasing confidence in the utility of fuel cycle analysis generally.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Carlsen, Robert W.},
    	month = mar,
    	year = {2016},
    }
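
    One way to picture the variable-to-schedule mapping described in this abstract is to treat each optimization variable as the fraction of the remaining capacity gap filled by a given reactor type in a given time step, so a schedule built from bounded variables can never overshoot the demand curve. The sketch below only illustrates that idea with made-up reactor types, capacities, and demand (and ignores retirements); it is not the encoding actually used in Cyclus or in the thesis.

    # Illustrative mapping from bounded variables (each in [0, 1]) to a reactor
    # deployment schedule that never exceeds a power-demand curve.

    REACTOR_GWE = {"lwr": 1.0, "sfr": 0.4}   # capacity per unit, GWe (hypothetical)

    def to_schedule(variables, demand_gwe):
        """variables[t][rx]: fraction of the remaining capacity gap filled by rx at step t."""
        schedule, built = [], 0.0
        for t, demand in enumerate(demand_gwe):
            builds = {}
            for rx, cap in REACTOR_GWE.items():
                gap = max(demand - built, 0.0)
                n = int(variables[t][rx] * gap / cap)   # flooring keeps builds within the gap
                builds[rx] = n
                built += n * cap
            schedule.append(builds)
        return schedule

    demand = [10.0, 12.0, 15.0]                  # GWe demanded at each time step (hypothetical)
    variables = [{"lwr": 0.8, "sfr": 0.5}] * 3   # a candidate point from an optimizer
    print(to_schedule(variables, demand))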
    
  12. Matthew J. Gidden, "An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (March 2015)
    Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
    @phdthesis{gidden_agent-based_2015,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {An {Agent}-{Based} {Modeling} {Framework} and {Application} for the {Generic} {Nuclear} {Fuel} {Cycle}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:ZAPAY7G76EAKB9E/datastreams/REF/content},
    	abstract = {Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent, isotopic-quality based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.},
    	school = {University of Wisconsin-Madison},
    	author = {Gidden, Matthew J.},
    	month = mar,
    	year = {2015},
    }
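
    In its simplest single-commodity form, the exchange solve described above reduces to a transportation linear program. The toy problem below is only a stand-in for that idea, using hypothetical supplier capacities, consumer requests, and arc costs and a scipy solve; the formulation in the thesis is multicommodity and carries fuel-quality preferences.

    # Toy single-commodity transportation LP, a much-simplified stand-in for the
    # multicommodity exchange formulation. All quantities are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([10.0, 5.0])        # two suppliers' capacities (kg, hypothetical)
    demand = np.array([6.0, 4.0, 3.0])    # three consumers' requests
    cost = np.array([[1.0, 2.0, 3.0],     # preference/cost on each supplier-consumer arc
                     [2.0, 1.0, 1.0]])

    n_s, n_c = cost.shape
    c = cost.ravel()                      # flatten x[i, j] -> x[i * n_c + j]

    # supplier capacity: sum_j x[i, j] <= supply[i]
    A_ub = np.zeros((n_s, n_s * n_c))
    for i in range(n_s):
        A_ub[i, i * n_c:(i + 1) * n_c] = 1.0

    # consumer request: sum_i x[i, j] == demand[j]
    A_eq = np.zeros((n_c, n_s * n_c))
    for j in range(n_c):
        A_eq[j, j::n_c] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
    print(res.x.reshape(n_s, n_c))        # optimal flow on each arc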
    
  13. K. L. Dunn, "Monte Carlo Mesh Tallies based on a Kernel Density Estimator Approach", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2014-08-08)
    Kernel density estimators (KDE) are considered for use with the Monte Carlo transport method as an alternative to conventional methods for solving fixed-source problems on arbitrary 3D input meshes. Since conventional methods produce a piecewise constant approximation, their accuracy can suffer when using coarse meshes to approximate neutron flux distributions with strong gradients. Comparatively, KDE mesh tallies produce point estimates independently of the mesh structure, which means that their values will not change even if the mesh is refined. A new KDE integral-track estimator is introduced in this dissertation for use with mesh tallies. Two input parameters are needed, namely a bandwidth and kernel. The bandwidth is equivalent to choosing mesh cell size, whereas the kernel determines the weight of each contribution with respect to its distance from the calculation point being evaluated. The KDE integral-track estimator is shown to produce more accurate results than the original KDE track length estimator, with no performance penalty, and identical or comparable results to conventional methods. However, unlike conventional methods, KDE mesh tallies can use different bandwidths and kernels to improve accuracy without changing the input mesh. This dissertation also explores the accuracy and efficiency of the KDE integral-track mesh tally in detail. Like other KDE applications, accuracy is highly dependent on the choice of bandwidth. This choice becomes even more important when approximating regions of the neutron flux distribution with high curvature, where changing the bandwidth is much more sensitive. Other factors that affect accuracy include properties of the kernel, and the boundary bias effect for calculation points near external geometrical boundaries. Numerous factors also affect efficiency, with the most significant being the concept of the neighborhood region. The neighborhood region determines how many calculation points are expected to add non-trivial contributions, which depends on node density, bandwidth, kernel, and properties of the track being tallied. The KDE integral-track mesh tally is a promising alternative for solving fixed-source problems on arbitrary 3D input meshes. Producing results at specific points rather than cell-averaged values allows a more accurate representation of the neutron flux distribution to be obtained, especially when coarser meshes are used.
    @phdthesis{dunn_monte_2014,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Monte {Carlo} {Mesh} {Tallies} based on a {Kernel} {Density} {Estimator} {Approach}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:OXDMBPODZJERF8A/datastreams/REF/content},
    	abstract = {Kernel density estimators (KDE) are considered for use with the Monte Carlo transport method as an alternative to conventional methods for solving fixed-source problems on arbitrary 3D input meshes.  Since conventional methods produce a piecewise constant approximation, their accuracy can suffer when using coarse meshes to approximate neutron flux distributions with strong gradients.  Comparatively, KDE mesh tallies produce point estimates independently of the mesh structure, which means that their values will not change even if the mesh is refined.
    
    A new KDE integral-track estimator is introduced in this dissertation for use with mesh tallies.  Two input parameters are needed, namely a bandwidth and kernel.  The bandwidth is equivalent to choosing mesh cell size, whereas the kernel determines the weight of each contribution with respect to its distance from the calculation point being evaluated.  The KDE integral-track estimator is shown to produce more accurate results than the original KDE track length estimator, with no performance penalty, and identical or comparable results to conventional methods.  However, unlike conventional methods, KDE mesh tallies can use different bandwidths and kernels to improve accuracy without changing the input mesh.
    
    This dissertation also explores the accuracy and efficiency of the KDE integral-track mesh tally in detail.  Like other KDE applications, accuracy is highly dependent on the choice of bandwidth.  This choice becomes even more important when approximating regions of the neutron flux distribution with high curvature, where changing the bandwidth is much more sensitive.  Other factors that affect accuracy include properties of the kernel, and the boundary bias effect for calculation points near external geometrical boundaries.  Numerous factors also affect efficiency, with the most significant being the concept of the neighborhood region.  The neighborhood region determines how many calculation points are expected to add non-trivial contributions, which depends on node density, bandwidth, kernel, and properties of the track being tallied.
    
    The KDE integral-track mesh tally is a promising alternative for solving fixed-source problems on arbitrary 3D input meshes.  Producing results at specific points rather than cell-averaged values allows a more accurate representation of the neutron flux distribution to be obtained, especially when coarser meshes are used.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Dunn, K. L.},
    	month = aug,
    	year = {2014},
    }
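
    The bandwidth/kernel trade-off described in this abstract can be seen in one dimension: a track's score at a calculation point is the integral of the kernel along the track, and the bandwidth h plays the role that mesh cell size plays in a conventional tally. The sketch below numerically integrates an Epanechnikov kernel along a 1-D track; it only illustrates the idea and is not the thesis's 3-D integral-track estimator, which evaluates the integral per track segment.

    # 1-D illustration of a kernel-weighted track contribution: integrate a kernel
    # of bandwidth h along a straight track and score it at one calculation point.
    import numpy as np

    def epanechnikov(u):
        """Second-order kernel; identically zero outside |u| > 1."""
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

    def track_contribution(x_eval, x0, x1, weight=1.0, h=0.5, n=2001):
        """Riemann-sum approximation of weight * integral of K_h(x - x_eval) dx over the track."""
        s = np.linspace(x0, x1, n)
        ds = (x1 - x0) / (n - 1)
        return weight * np.sum(epanechnikov((s - x_eval) / h) / h) * ds

    # the same track scored at x = 1.0 with a narrow and a wide bandwidth
    print(track_contribution(1.0, 0.0, 2.0, h=0.25), track_contribution(1.0, 0.0, 2.0, h=1.0))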
    
  14. Kathryn D. Huff, "An Integrated Used Fuel Disposition and Generic Repository Model for Fuel Cycle Analysis", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (8/16/2013)
    As the United States and other nuclear nations consider alternative fuel cycles and waste disposal options simultaneously, an integrated fuel cycle and generic disposal system analysis tool grows increasingly necessary for informing spent nuclear fuel management policy. The long term performance characteristics of deep geologic disposal concepts are affected by heat and radionuclide release characteristics sensitive to disposal system choices as well as variable spent fuel compositions associated with alternative fuel cycles. Computational tools capable of simulating the dynamic, heterogeneous spent fuel isotopics resulting from alternative nuclear fuel cycles and fuel cycle transition scenarios are, however, lacking in disposal system modeling options. This work has resulted in Cyder, a generic repository software library appropriate for system analysis of potential future fuel cycle deployment scenarios. By emphasizing modularity and speed, Cyder is capable of representing the dominant physics of candidate geologic host media, repository designs, and engineering components. Robust and flexible integration with the Cyclus fuel cycle simulator enables this analysis in the context of fuel cycle options.
    @phdthesis{huff_integrated_2013,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {An {Integrated} {Used} {Fuel} {Disposition} and {Generic} {Repository} {Model} for {Fuel} {Cycle} {Analysis}},
    	url = {https://depot.library.wisc.edu/repository/fedora/1711.dl:Y2ZY2ZRN6GI5K8S/datastreams/REF/content},
    	abstract = {As the United States and other nuclear nations consider alternative fuel cycles and waste disposal options simultaneously, an integrated fuel cycle and generic disposal system analysis tool grows increasingly necessary for informing spent nuclear fuel management policy. The long term performance characteristics of deep geologic disposal concepts are affected by heat and radionuclide release characteristics sensitive to disposal system choices as well as variable spent fuel compositions associated with alternative fuel cycles. Computational tools capable of simulating the dynamic, heterogeneous spent fuel isotopics resulting from alternative nuclear fuel cycles and fuel cycle transition scenarios are, however, lacking in disposal system modeling options. This work has resulted in Cyder, a generic repository software library appropriate for system analysis of potential future fuel cycle deployment scenarios. By emphasizing modularity and speed, Cyder is capable of representing the dominant physics of candidate geologic host media, repository designs, and engineering components. Robust and flexible integration with the Cyclus fuel cycle simulator enables this analysis in the context of fuel cycle options.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Huff, Kathryn D.},
    	month = aug,
    	year = {2013},
    }
    
  15. Matthew Klebenow, "Development of a Modern, Modular and Maintainable Transmutation Solver", BS Engineering Physics, University of Wisconsin-Madison, (7/25/2013)
    @phdthesis{klebenow_development_2013,
    	address = {Madison, WI, United States},
    	type = {{BS} {Engineering} {Physics}},
    	title = {Development of a {Modern}, {Modular} and {Maintainable} {Transmutation} {Solver}},
    	school = {University of Wisconsin-Madison},
    	author = {Klebenow, Matthew},
    	month = jul,
    	year = {2013},
    }
    
  16. Eric Relson, "Improved Methods For Sampling Mesh-Based Volumetric Sources In Monte Carlo Transport", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (July 2013)
    This research focuses on developing mesh-based techniques for sampling distributed, volumetric sources in Monte Carlo particle transport codes, such as MCNP. This work culminated in several source sampling techniques being implemented within a 3-D neutron activation workflow. The most significant development is the implementation of an efficient voxel sampling technique. Voxel sampling can be applied to source meshes with any number of mesh elements thanks to efficient sampling via the alias method, and meshing of non-source volumes can be avoided. Voxel sampling in turn enables straight-forward implementation of source biasing for variance reduction, and also the use of unstructured source meshes using tetrahedral mesh elements. The uniform sampling technique used in past work is effectively a biasing scheme, and thus can be implemented more efficiently with biased voxel sampling. For this work, the source meshes are inherited from neutron mesh tallies. Cartesian structured meshes, which provide straight-forward compatibility with legacy tools can be sampled with either the voxel or uniform sampling methods. Alternately, using an unstructured mesh (via the unstructured mesh tally capabilities in DAG-MCNP) allows for better conforming meshes – particularly with geometries that do not align well with a structured mesh, or where the source region is spread out through a region of non-source materials, such as systems of pipes. The set of source sampling techniques is useful as a toolkit for obtaining quality answers from a variety of scenarios. This thesis supplements methods development and implementation with experiments to identify and understand which sampling techniques should be used in different scenarios. The new sampling methods and workflows are shown to be in good agreement with results from older methods. While there remain several aspects of the new methods’ behavior to characterize, voxel sampling and its derivatives have fully replaced older sampling methods in neutron activation analysis work at UW-Madison.
    @phdthesis{relson_improved_2013,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Improved {Methods} {For} {Sampling} {Mesh}-{Based} {Volumetric} {Sources} {In} {Monte} {Carlo} {Transport}},
    	abstract = {This research focuses on developing mesh-based techniques for sampling distributed,
     volumetric sources in Monte Carlo particle transport codes, such as MCNP. This work
     culminated in several source sampling techniques being implemented within a 3-D neutron
     activation workflow.
    
    The most significant development is the implementation of an efficient voxel sampling
     technique. Voxel sampling can be applied to source meshes with any number of mesh
     elements thanks to efficient sampling via the alias method, and meshing of non-source volumes
     can be avoided. Voxel sampling in turn enables straight-forward implementation of source
     biasing for variance reduction, and also the use of unstructured source meshes using
     tetrahedral mesh elements. The uniform sampling technique used in past work is effectively a
     biasing scheme, and thus can be implemented more efficiently with biased voxel sampling.
    
    For this work, the source meshes are inherited from neutron mesh tallies. Cartesian
     structured meshes, which provide straight-forward compatibility with legacy tools can be
     sampled with either the voxel or uniform sampling methods. Alternately, using an unstructured
     mesh (via the unstructured mesh tally capabilities in DAG-MCNP) allows for better conforming
     meshes – particularly with geometries that do not align well with a structured mesh, or where
     the source region is spread out through a region of non-source materials, such as systems of
     pipes.
    
    The set of source sampling techniques is useful as a toolkit for obtaining quality answers
     from a variety of scenarios. This thesis supplements methods development and
     implementation with experiments to identify and understand which sampling techniques
     should be used in different scenarios. The new sampling methods and workflows are shown to
     be in good agreement with results from older methods. While there remain several aspects of
     the new methods’ behavior to characterize, voxel sampling and its derivatives have fully
     replaced older sampling methods in neutron activation analysis work at UW-Madison.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Relson, Eric},
    	month = jul,
    	year = {2013},
    }
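
    The efficiency claim in this abstract rests on the alias method, which pre-processes an arbitrary discrete distribution (here, per-voxel source strengths) into a table that lets each voxel be sampled with one uniform index draw and one comparison. The sketch below is the generic Walker/Vose construction with made-up strengths, not the code used in the thesis workflow.

    # Walker/Vose alias sampling: O(n) table construction, then O(1) per sample.
    import random

    def build_alias(weights):
        """Build (prob, alias) tables for sampling proportional to the given weights."""
        n = len(weights)
        total = sum(weights)
        prob = [w * n / total for w in weights]
        alias = [0] * n
        small = [i for i, p in enumerate(prob) if p < 1.0]
        large = [i for i, p in enumerate(prob) if p >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            alias[s] = l
            prob[l] -= 1.0 - prob[s]
            (small if prob[l] < 1.0 else large).append(l)
        for i in small + large:          # leftovers are exactly (or numerically) 1
            prob[i] = 1.0
        return prob, alias

    def sample_voxel(prob, alias):
        i = random.randrange(len(prob))
        return i if random.random() < prob[i] else alias[i]

    prob, alias = build_alias([0.1, 4.0, 0.5, 2.4])   # relative voxel source strengths (made up)
    counts = [0, 0, 0, 0]
    for _ in range(100000):
        counts[sample_voxel(prob, alias)] += 1
    print(counts)                                     # roughly proportional to the strengths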
    
  17. Ahmad Ibrahim, "Automatic Mesh Adaptivity for Hybrid Monte Carlo/Deterministic Neutronics Modeling of Difficult Shielding Problems", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (June 2012)
    Over the last decade, the role of neutronics modeling has been shifting from analysis of each component separately to high fidelity, full-scale analysis of the nuclear systems entire domains. The high accuracy, associated with minimizing modeling approximations and including more physical and geometric details, is now feasible because of advancements in computing hardware and development of efficient modeling methods. The hybrid Monte Carlo/deterministic techniques, CADIS and FW-CADIS dramatically increase the efficiency of neutronics modeling, but their use in the design of large and geometrically complex nuclear systems is restricted by the availability of computing resources for their preliminary deterministic calculations and the large computer memory requirements of their final Monte Carlo calculations. To reduce the computational time and memory requirements of the hybrid Monte Carlo/deterministic techniques while maintaining their efficiency improvements, three automatic mesh adaptivity algorithms were developed and added to the Oak Ridge National Laboratory AutomateD VAriaNce reducTion Generator (ADVANTG) code. First, a mixed-material approach, which we refer to as the macromaterial approach, enhances the fidelity of the deterministic models without having to refine the mesh of the deterministic calculations. Second, a deterministic mesh refinement algorithm improves the accuracy of structured mesh deterministic calculations by capturing as much geometric detail as possible without exceeding the total number of mesh elements that is usually determined by the availability of computing resources. Finally, a weight window coarsening algorithm decouples the weight window mesh from the mesh of the deterministic calculations to remove the memory constraint of the weight window map from the deterministic mesh resolution. To analyze the combined effect of the three algorithms developed in this thesis, they were used to calculate the prompt dose rate throughout the entire ITER experimental facility. This calculation represents a very challenging shielding problem because of the immense size and complexity of the ITER structure and the presence of a two meter thick biological shield. Compared to a FW-CADIS calculation with the same storage size of the variance reduction parameters, the use of the three algorithms resulted in a 23.3% increase in the regions where the dose rate results are achieved in a 10 day Monte Carlo calculation and increased the efficiency of the Monte Carlo simulation by a factor of 3.4. Because of this significant increase in the Monte Carlo efficiency which was not accompanied by an increase in the memory requirements, the use of the three algorithms in FW-CADIS simulations enabled the simulation of this difficult shielding problem on a regular computer cluster using parallel processing of Monte Carlo calculations. The results of the parallel Monte Carlo calculation agreed at four points with a very fine mesh deterministic calculation that was performed on the super-computer, Jaguar.
    @phdthesis{ibrahim_automatic_2012,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Automatic {Mesh} {Adaptivity} for {Hybrid} {Monte} {Carlo}/{Deterministic} {Neutronics} {Modeling} of {Difficult} {Shielding} {Problems}},
    	url = {http://depot.library.wisc.edu/repository/fedora/1711.dl:GFPDJ3G2URTCL9D/datastreams/REF/content},
    	abstract = {Over the last decade, the role of neutronics modeling has been shifting from analysis of
    each component separately to high fidelity, full-scale analysis of the nuclear systems entire
    domains. The high accuracy, associated with minimizing modeling approximations and including
    more physical and geometric details, is now feasible because of advancements in computing
    hardware and development of efficient modeling methods. The hybrid Monte Carlo/deterministic
    techniques, CADIS and FW-CADIS dramatically increase the efficiency of neutronics modeling,
    but their use in the design of large and geometrically complex nuclear systems is restricted by the
    availability of computing resources for their preliminary deterministic calculations and the
    large computer memory requirements of their final Monte Carlo calculations.
    To reduce the computational time and memory requirements of the hybrid Monte
    Carlo/deterministic techniques while maintaining their efficiency improvements, three automatic
    mesh adaptivity algorithms were developed and added to the Oak Ridge National Laboratory
    AutomateD VAriaNce reducTion Generator (ADVANTG) code. First, a mixed-material
    approach, which we refer to as the macromaterial approach, enhances the fidelity of the
    deterministic models without having to refine the mesh of the deterministic calculations. Second,
    a deterministic mesh refinement algorithm improves the accuracy of structured mesh
    deterministic calculations by capturing as much geometric detail as possible without exceeding
    the total number of mesh elements that is usually determined by the availability of computing
    resources. Finally, a weight window coarsening algorithm decouples the weight window mesh
    from the mesh of the deterministic calculations to remove the memory constraint of the weight
    window map from the deterministic mesh resolution.
    To analyze the combined effect of the three algorithms developed in this thesis, they were
    used to calculate the prompt dose rate throughout the entire ITER experimental facility. This
    calculation represents a very challenging shielding problem because of the immense size and
    complexity of the ITER structure and the presence of a two meter thick biological shield.
    Compared to a FW-CADIS calculation with the same storage size of the variance reduction
    parameters, the use of the three algorithms resulted in a 23.3\% increase in the regions where the
    dose rate results are achieved in a 10 day Monte Carlo calculation and increased the efficiency of
    the Monte Carlo simulation by a factor of 3.4. Because of this significant increase in the Monte
    Carlo efficiency which was not accompanied by an increase in the memory requirements, the use
    of the three algorithms in FW-CADIS simulations enabled the simulation of this difficult
    shielding problem on a regular computer cluster using parallel processing of Monte Carlo
    calculations. The results of the parallel Monte Carlo calculation agreed at four points with a very
    fine mesh deterministic calculation that was performed on the super-computer, Jaguar.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Ibrahim, Ahmad},
    	month = jun,
    	year = {2012},
    }
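
    The macromaterial idea in this abstract amounts to replacing each deterministic mesh cell's contents with a single volume-fraction-weighted mixture of the materials that overlap it, so geometric detail finer than the mesh still influences the cross sections. Below is a small sketch of that mixing step with hypothetical compositions and fractions; it is not the ADVANTG implementation, in which the fractions are typically obtained by sampling the CAD geometry.

    # Homogenize the materials overlapping one structured-mesh cell into a single
    # volume-fraction-weighted mixture of atom densities (all values made up).

    def macromaterial(cell_fractions, materials):
        """cell_fractions: {material: volume fraction}; materials: {material: {nuclide: atom density}}."""
        mixed = {}
        for name, frac in cell_fractions.items():
            for nuclide, density in materials[name].items():
                mixed[nuclide] = mixed.get(nuclide, 0.0) + frac * density
        return mixed

    materials = {
        "steel": {"Fe56": 8.3e-2, "Cr52": 1.6e-2},   # atoms/(b*cm), hypothetical
        "water": {"H1": 6.7e-2, "O16": 3.3e-2},
    }
    # a mesh cell that is 30% steel, 60% water, 10% void
    print(macromaterial({"steel": 0.3, "water": 0.6}, materials))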
    
  18. Rachel N. Slaybaugh, "Acceleration Methods for Massively Parallel Deterministic Transport", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (November 2011)
    To enhance and improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved in a reasonable amount of time. Computing such fluxes accurately and efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration wrapped around Gauss Seidel for eigenvalue problems, both of which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of leadership-class computers. The first method is a multigroup Krylov solver that improves convergence when compared to Gauss Seidel and parallelizes the code in energy. Tests show that the multigroup Krylov solver can substantially outperform Gauss Seidel in challenging problems. The energy decomposition added by the solver allows Denovo to solve problems on hundreds of thousands of cores. The second method is Rayleigh quotient iteration (RQI), an old method being applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way, and theory indicates that RQI should converge in fewer iterations than the traditional power iteration. RQI creates an energy-block-dense system that would be difficult for Gauss Seidel to solve. The new Krylov solver treats this kind of system very efficiently and RQI would not be a good choice without it. However, RQI creates poorly conditioned systems such that the method is only useful in very simple problems. Preconditioning can alleviate this concern. The final method is a multigrid in energy, physics-based preconditioner. Because the grids are in energy rather than space or angle, the preconditioner can easily and efficiently take advantage of the new energy decomposition. The new preconditioner was very effective at reducing multigroup iteration count for many types of problems. In some cases it also reduced eigenvalue iteration count. The application of the preconditioner allowed RQI to be successful for problems it could not solve otherwise. The preconditioner also scaled very well in energy, and was tested on up to 200,000 cores using a full-facility pressurized water reactor. The three methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy pre-conditioner being particularly successful on their own. For “grand challenge” eigenvalue problems, though, the largest benefit comes from using these methods in concert.
    @phdthesis{slaybaugh_acceleration_2011,
    	address = {Madison, WI},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Acceleration {Methods} for {Massively} {Parallel} {Deterministic} {Transport}},
    	abstract = {To enhance and improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved in a reasonable amount of time. Computing such fluxes accurately and efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration wrapped around Gauss Seidel for eigenvalue problems, both of which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios.
    
    Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of leadership-class computers. The first method is a multigroup Krylov solver that improves convergence when compared to Gauss Seidel and parallelizes the code in energy. Tests show that the multigroup Krylov solver can substantially outperform Gauss Seidel in challenging problems. The energy decomposition added by the solver allows Denovo to solve problems on hundreds of thousands of cores.
    
    The second method is Rayleigh quotient iteration (RQI), an old method being applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way, and theory indicates that RQI should converge in fewer iterations than the traditional power iteration. RQI creates an energy-block-dense system that would be difficult for Gauss Seidel to solve. The new Krylov solver treats this kind of system very efficiently and RQI would not be a good choice without it. However, RQI creates poorly conditioned systems such that the method is only useful in very simple problems. Preconditioning can alleviate this concern.
    
    The final method is a multigrid in energy, physics-based preconditioner. Because the grids are in energy rather than space or angle, the preconditioner can easily and efficiently take advantage of the new energy decomposition. The new preconditioner was very effective at reducing multigroup iteration count for many types of problems. In some cases it also reduced eigenvalue iteration count. The application of the preconditioner allowed RQI to be successful for problems it could not solve otherwise. The preconditioner also scaled very well in energy, and was tested on up to 200,000 cores using a full-facility pressurized water reactor.
    
    The three methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy pre-conditioner being particularly successful on their own. For “grand challenge” eigenvalue problems, though, the largest benefit comes from using these methods in concert.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Slaybaugh, Rachel N.},
    	month = nov,
    	year = {2011},
    	keywords = {Prelim},
    }
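
    Rayleigh quotient iteration, as described above, updates its shift with the current Rayleigh quotient and re-solves a shifted system on every pass, which is why an efficient solver for the resulting energy-block-dense systems matters. The dense-matrix sketch below shows only the bare algorithm on a small symmetric matrix; in the thesis the operator is the transport k-eigenvalue problem and the shifted solves use the multigroup Krylov solver.

    # Bare Rayleigh quotient iteration: solve a shifted system each pass and
    # update the shift with the Rayleigh quotient.
    import numpy as np

    def rqi(A, x0, iters=10, tol=1e-10):
        x = x0 / np.linalg.norm(x0)
        sigma = x @ A @ x                                  # initial Rayleigh quotient
        for _ in range(iters):
            y = np.linalg.solve(A - sigma * np.eye(len(x)), x)
            x = y / np.linalg.norm(y)
            new_sigma = x @ A @ x
            if abs(new_sigma - sigma) < tol * abs(new_sigma):
                sigma = new_sigma
                break
            sigma = new_sigma
        return sigma, x

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M + M.T                                            # small symmetric test matrix
    value, vector = rqi(A, rng.standard_normal(5))
    print(value, np.linalg.eigvalsh(A))                    # value matches one eigenvalue of A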
    
  19. Patrick Snouffer, "Validation and Verification of Direct Accelerated Geometry Monte Carlo", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (August 2011)
    As both Monte Carlo radiation transport codes and 3D CAD modeling become more widely used, there has been an increasing number of efforts to combine these tools. One such effort is the Direct Accelerated Geometry Monte Carlo (DAGMC) software package being developed at the University of Wisconsin-Madison. DAGMC performs the particle tracking needed for Monte Carlo radiation transport codes directly on CAD geometries. DAGMC has been in development for a number of years and is in need of validation and verification in order to build user confidence in DAGMC's reliability and accuracy. This work performs extensive testing of DAGMC implemented with the radiation transport code Monte Carlo N-Particle 5 (DAG-MCNP5). Four test suites have been compiled for DAG-MCNP5 to ensure the accuracy of the code now and for future developers. These test suites are based largely on the test suites for MCNP5 and include: a suite of 80 regression tests, a suite of 75 verification tests, a suite of 30 validation criticality tests, and a suite of 19 validation shielding tests. These tests encompass a wide range of geometries, materials, and physics to test almost all of the features of DAG-MCNP5. The results of these tests were compared to both analytical and experimental results, where appropriate, and MCNP5 results. A faceting tolerance study was also performed for many of these tests. It was found that a faceting tolerance no larger than 1e-4 cm produces statistically similar results to MCNP5 on a consistent basis for all problem types. It is concluded that DAG-MCNP5 performs as accurately as MCNP5 for these test problems, and that DAG-MCNP5 can be considered a reliable neutronics code.
    @phdthesis{snouffer_validation_2011,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Validation and {Verification} of {Direct} {Accelerated} {Geometry} {Monte} {Carlo}},
    	abstract = {As both Monte Carlo radiation transport codes and 3D CAD modeling become more widely used, there has been an increasing number of efforts to combine these tools. One such effort is the Direct Accelerated Geometry Monte Carlo (DAGMC) software package being developed at the University of Wisconsin-Madison. DAGMC performs the particle tracking needed for Monte Carlo radiation transport codes directly on CAD geometries. DAGMC has been in development for a number of years and is in need of validation and verification in order to build user confidence in DAGMC's reliability and accuracy.
    
    This work performs extensive testing of DAGMC implemented with the radiation transport code Monte Carlo N-Particle 5 (DAG-MCNP5). Four test suites have been compiled for DAG-MCNP5 to ensure the accuracy of the code now and for future developers. These test suites are based largely on the test suites for MCNP5 and include: a suite of 80 regression tests, a suite of 75 verification tests, a suite of 30 validation criticality tests, and a suite of 19 validation shielding tests. These tests encompass a wide range of geometries, materials, and physics to test almost all of the features of DAG-MCNP5. The results of these tests were compared to both analytical and experimental results, where appropriate, and MCNP5 results. A faceting tolerance study was also performed for many of these tests. It was found that a faceting tolerance no larger than 1e-4 cm produces statistically similar results to MCNP5 on a consistent basis for all problem types. It is concluded that DAG-MCNP5 performs as accurately as MCNP5 for these test problems, and that DAG-MCNP5 can be considered a reliable neutronics code.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Snouffer, Patrick},
    	month = aug,
    	year = {2011},
    }
    
  20. Marina Arabidze, "Natural Gas Sector in Georgia: Challenges and Options for Security", MS Environment and Resources, University of Wisconsin-Madison, (May 2011)
    @phdthesis{arabidze_natural_2011,
    	address = {Madison, WI, United States},
    	type = {{MS} {Environment} and {Resources}},
    	title = {Natural {Gas} {Sector} in {Georgia}: {Challenges} and {Options} for {Security}},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Arabidze, Marina},
    	month = may,
    	year = {2011},
    }
    
  21. Erik Nygaard, " ", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2011)
    @phdthesis{nygaard_notitle_2011,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	school = {University of Wisconsin-Madison},
    	author = {Nygaard, Erik},
    	year = {2011},
    }
    
  22. Damien Moule, "Sampling Material Composition of CAD Geometries", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2011)
    @phdthesis{moule_sampling_2011,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Sampling {Material} {Composition} of {CAD} {Geometries}},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Moule, Damien},
    	year = {2011},
    }
    
  23. Brandon M. Smith, "Robust Tracking and Advanced Geometry for Monte Carlo Radiation Transport", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2011)
    A set of improved geometric capabilities are developed for the Direct Accelerated Geometry for Monte Carlo (DAGMC) library to increase its ease of use and accuracy. The improvements are watertight faceting, robust particle tracking, automatic creation of nonsolid space, and overlap tolerance. Before being sealed, adjacent faceted surfaces do not have the same discretization along shared curves. Sealing together surfaces to create a watertight faceting prevents leakage of particles between surfaces. The tracking algorithm is made robust by ensuring numerical consistency and avoiding geometric tolerances. Monte Carlo simulation requires all space to be defined, whether it be vacuum, air, coolant, or a solid material. The implicit creation of nonsolid space reduces human effort otherwise required to explicitly create nonsolid space in a CAD program. CAD models often contain small gaps and overlaps between adjacent volumes due to imprecise modeling, file translation, or intentional deformation. Although gaps are filled by the implicit creation of nonsolid space, overlaps cause geometric queries to become unreliable. The particle tracking algorithm and point inclusion test are modified to tolerate small overlaps of adjacent volumes. Overlap-tolerant particle tracking eliminates manual repair of CAD models and enables analysis of meshed finite element models undergoing structural deformation. These improvements are implemented in a coupling of DAGMC with the Monte Carlo N-Particle (MCNP) code, known as DAG-MCNP. The elimination of both manual CAD repair and lost particles are demonstrated with CAD models used in production calculations.
    @phdthesis{smith_robust_2011,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Robust {Tracking} and {Advanced} {Geometry} for {Monte} {Carlo} {Radiation} {Transport}},
    	abstract = {A set of improved geometric capabilities are developed for the Direct Accelerated Geometry for Monte Carlo (DAGMC) library to increase its ease of use and accuracy. The improvements are watertight faceting, robust particle tracking, automatic creation of nonsolid space, and overlap tolerance. Before being sealed, adjacent faceted surfaces do not have the same discretization along shared curves. Sealing together surfaces to create a watertight faceting prevents leakage of particles between surfaces. The tracking algorithm is made robust by ensuring numerical consistency and avoiding geometric tolerances. Monte Carlo simulation requires all space to be defined, whether it be vacuum, air, coolant, or a solid material. The implicit creation of nonsolid space reduces human effort otherwise required to explicitly create nonsolid space in a CAD program. CAD models often contain small gaps and overlaps between adjacent volumes due to imprecise modeling, file translation, or intentional deformation. Although gaps are filled by the implicit creation of nonsolid space, overlaps cause geometric queries to become unreliable. The particle tracking algorithm and point inclusion test are modified to tolerate small overlaps of adjacent volumes. Overlap-tolerant particle tracking eliminates manual repair of CAD models and enables analysis of meshed finite element models undergoing structural deformation. These improvements are implemented in a coupling of DAGMC with the Monte Carlo N-Particle (MCNP) code, known as DAG-MCNP. The elimination of both manual CAD repair and lost particles are demonstrated with CAD models used in production calculations.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Smith, Brandon M.},
    	year = {2011},
    }
    
  24. Tae Wook Ahn, "Discrete and Multi-Region Economic Modeling of a Global Nuclear Fuel Cycle on GENIUSv2", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2010)
    Globalization of nuclear technologies and developments on advanced fuel cycles have been steadily growing. This stimulated the development of various systems analysis tools for nuclear fuel cycle simulations. There have also been particular interests on global-scale economic models that can capture different economic, political, social, and technical impacts on the nuclear fuel cycle. In support of the Advanced Fuel Cycle Initiatives and the Global Nuclear Energy Partnership, the Simulation Institute for Nuclear Energy Modeling and Analysis (SINEMA) project attempted to develop an integrated approach that can interconnect different fuel cycle analysis tools. The Global Evaluation of Nuclear Infrastructure Utilization Scenarios (GENIUS) was originally developed to be the nuclear enterprise model in the SINEMA project for policy and decision makers. Currently, its successor, GENIUSv2, is being developed at the University of Wisconsin-Madison. This thesis implements an economic model that uses the capabilities of GENIUSv2 to calculate streams of monthly costs of various global-scale nuclear fuel cycle scenarios. Discrete monthly calculations allow the user to capture economic impacts of any variations that occur over time. The region-institution-facility hierarchical modeling capability in GENIUSv2 provides the user flexibility to observe economic impacts at facility, institutional, and regional levels. The economic impacts at a facility level were observed by implementing user-defined unplanned outages throughout the lifetime of the reactor. The economic impacts at institutional and regional levels were also observed by imposing political and/or technical trade disruptions between facilities.
    @phdthesis{ahn_discrete_2010,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Discrete and {Multi}-{Region} {Economic} {Modeling} of a {Global} {Nuclear} {Fuel} {Cycle} on {GENIUSv2}},
    	abstract = {Globalization of nuclear technologies and developments on advanced fuel cycles have been steadily growing. This stimulated the development of various systems
     analysis tools for nuclear fuel cycle simulations. There have also been particular
     interests on global-scale economic models that can capture different economic,
     political, social, and technical impacts on the nuclear fuel cycle. In support of the
    Advanced Fuel Cycle Initiatives and the Global Nuclear Energy Partnership, the
     Simulation Institute for Nuclear Energy Modeling and Analysis (SINEMA) project
     attempted to develop an integrated approach that can interconnect different fuel
     cycle analysis tools. The Global Evaluation of Nuclear Infrastructure Utilization
     Scenarios (GENIUS) was originally developed to be the nuclear enterprise model in
     the SINEMA project for policy and decision makers. Currently, its successor,
     GENIUSv2, is being developed at the University of Wisconsin-Madison. This thesis
     implements an economic model that uses the capabilities of GENIUSv2 to calculate
     streams of monthly costs of various global-scale nuclear fuel cycle scenarios.
     Discrete monthly calculations allow the user to capture economic impacts of any
     variations that occur over time. The region-institution-facility hierarchical modeling
     capability in GENIUSv2 provides the user flexibility to observe economic impacts at
     facility, institutional, and regional levels. The economic impacts at a facility level
     were observed by implementing user-defined unplanned outages throughout the
     lifetime of the reactor. The economic impacts at institutional and regional levels
     were also observed by imposing political and/or technical trade disruptions
     between facilities.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Ahn, Tae Wook},
    	year = {2010},
    }
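
    The "streams of monthly costs" in this abstract are discrete time series that can be rolled up from facilities to institutions to regions and, where useful, discounted to a present value. The sketch below shows only that last step for one hypothetical reactor with an unplanned outage; the cash flows and discount rate are placeholders, not GENIUSv2 inputs.

    # Discount a discrete monthly cost stream to a present value (all values made up).

    def present_value(monthly_costs, annual_rate=0.05):
        r = (1.0 + annual_rate) ** (1.0 / 12.0) - 1.0   # equivalent monthly rate
        return sum(c / (1.0 + r) ** m for m, c in enumerate(monthly_costs))

    # 12 months of fuel and O&M cost plus one unplanned outage in month 7
    costs = [2.0e6] * 12
    costs[7] += 5.0e6
    print(f"present value: ${present_value(costs):,.0f}")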
    
  25. Brian C. Kiedrowski, "Adjoint Weighting for Continuous-Energy Monte Carlo Radiation Transport", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2009)
    Methods are developed for importance or adjoint weighting of individual tally scores within a continuous-energy k-eigenvalue Monte Carlo calculation. These adjoint-weighted tallies allow for the calculation of certain quantities important to understanding the physics of a nuclear reactor. The methods, unlike traditional approaches to computing adjoint-weighted quantities, do not attempt to invert the random walk. Rather, they are based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. This can be calculated in a strictly forward calculation, and this factor can be applied to previously computed tally scores. These methods are implemented in a production Monte Carlo code and are used to calculate parameters requiring adjoint weighting, the point reactor kinetics parameters and reactivity changes based upon first-order perturbation theory. The results of these calculations are compared against experimental measurements, equivalent discrete ordinates calculations, or other Monte Carlo based techniques.
    @phdthesis{kiedrowski_adjoint_2009,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Adjoint {Weighting} for {Continuous}-{Energy} {Monte} {Carlo} {Radiation} {Transport}},
    	abstract = {Methods are developed for importance or adjoint weighting of individual tally scores within a continuous-energy k-eigenvalue Monte Carlo calculation. These adjoint-weighted tallies allow for the calculation of certain quantities important to understanding the physics of a nuclear reactor.
    
    The methods, unlike traditional approaches to computing adjoint-weighted quantities, do not attempt to invert the random walk. Rather, they are based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. This can be calculated in a strictly forward calculation, and this factor can be applied to previously computed tally scores.
    
    These methods are implemented in a production Monte Carlo code and are used to calculate parameters requiring adjoint weighting, the point reactor kinetics parameters and reactivity changes based upon first-order perturbation theory. The results of these calculations are compared against experimental measurements, equivalent discrete ordinates calculations, or other Monte Carlo based techniques.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Kiedrowski, Brian C.},
    	year = {2009},
    }
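
    The iterated fission probability interpretation used in this work says the adjoint flux at a phase-space point is proportional to the asymptotic population produced, generations later, by a single neutron introduced there. The toy sketch below makes that concrete with a made-up three-region fission matrix; it is only an illustration, not the continuous-energy implementation described in the thesis.

    # Toy iterated fission probability: start one neutron in each region of a small
    # fission-matrix model and follow the population for several generations; the
    # relative asymptotic populations are proportional to the regions' importances.
    import numpy as np

    F = np.array([[0.9, 0.3, 0.0],    # F[i, j]: next-generation fission neutrons
                  [0.3, 0.8, 0.2],    # produced in region i per neutron born in j
                  [0.0, 0.2, 0.7]])   # (values made up, not a real reactor)

    generations = 20
    importance = []
    for j in range(3):
        pop = np.zeros(3)
        pop[j] = 1.0                   # one neutron introduced in region j
        for _ in range(generations):
            pop = F @ pop
        importance.append(pop.sum())

    importance = np.array(importance)
    print(importance / importance.max())   # relative importance of each start region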
    
  26. Kyle M. Oliver, "GENIUSv2: Software Design and Mathematical Formulations for Multi-Region Discrete Nuclear Fuel Cycle Simulation and Analysis", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2009)
    @phdthesis{oliver_geniusv2_2009,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {{GENIUSv2}: {Software} {Design} and {Mathematical} {Formulations} for {Multi}-{Region} {Discrete}  {Nuclear} {Fuel} {Cycle} {Simulation} and {Analysis}},
    	school = {University of Wisconsin-Madison},
    	author = {Oliver, Kyle M.},
    	year = {2009},
    }
    
  27. Jeremy Roberts, "Further Interpretation Of Sensitivity Data In Support Of Burnup Credit", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2009)
    @phdthesis{roberts_further_2009,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Further {Interpretation} {Of} {Sensitivity} {Data} {In} {Support} {Of} {Burnup} {Credit}},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Roberts, Jeremy},
    	year = {2009},
    }
    
  28. Andrew Scholbrock, "Attribute Management in ACIS Based Geometry Files", BS Engineering Physics, University of Wisconsin-Madison, (2008)
    Computer aided design provides for a means to represent physical quantities in a computer as well as the concepts related to it in order to provide an efficient design process. Using computer simulation over physical tests allows for quicker and cheaper results. However, much of the potential that computer aided design has to offer is not being utilized due to the cumbersome interfaces that currently stand between engineers and computers. Specifically, when dealing with attributes (labels that help define physical properties of the geometric representation), current geometry tools do not provide means to adapt attributes as needed in the simulation based design process. This research looks into creating a tool to apply and manipulate attributes on geometric entities while providing an efficient means for a user to interact with a geometric model.
    @phdthesis{scholbrock_attribute_2008,
    	address = {Madison, WI, United States},
    	type = {{BS} {Engineering} {Physics}},
    	title = {Attribute {Management} in {ACIS} {Based} {Geometry} {Files}},
    	abstract = {Computer aided design provides for a means to represent physical quantities in a computer as well as the concepts related to it in order to provide an efficient design process. Using computer simulation over physical tests allows for quicker and cheaper results. However, much of the potential that computer aided design has to offer is not being utilized due to the cumbersome interfaces that currently stand between engineers and computers. Specifically, when dealing with attributes (labels that help define physical properties of the geometric representation), current geometry tools do not provide means to adapt attributes as needed in the simulation based design process. This research looks into creating a tool to apply and manipulate attributes on geometric entities while providing an efficient means for a user to interact with a geometric model.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Scholbrock, Andrew},
    	year = {2008},
    }
    
  29. Michael Priaulx, "Development of a PARCS/HELIOS Model for the University of Wisconsin Nuclear Reactor", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2008)
    @phdthesis{priaulx_development_2008,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Development of a {PARCS}/{HELIOS} {Model} for the {University} of {Wisconsin} {Nuclear} {Reactor}},
    	school = {University of Wisconsin-Madison},
    	author = {Priaulx, Michael},
    	year = {2008},
    }
    
  30. Ryan Grady, "Development of Economic Accounting for Nuclear Waste in Fuel Cycle Analysis", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2008)
    This research focuses on the development of an economic model for waste entering a repository. This work couples a repository loading model with an economic accounting system to determine a cost based on repository usage. The repository loading model determines the amount of repository space used by an arbitrary waste stream. Using the economic model in VISION.econ, the arbitrary waste stream can be assigned a cost. The cost for the space used is calibrated by computing the cost per meter of repository space if spent fuel is directly emplaced. This allows for accurate comparison between direct disposal and different recycling schemes. The length-based disposal cost accounts for fuel from different fuel types, burnups, and High-Level Waste (HLW) with an arbitrary isotope mix. Key derivatives of this work are an accounting system that can account for the repository savings of reprocessing and the ability to compare direct disposal to reprocessing with varying separation schemes. From this work, it was determined that the current mass-based accounting system for HLW disposal costs can be significantly different than the length-based accounting system proposed in this work when advanced reprocessing schemes are implemented. Furthermore, this work shows the length-based accounting system may be needed to find the disposal cost at which reprocessing is economically equivalent to direct disposal.
    @phdthesis{grady_development_2008,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Development of {Economic} {Accounting} for {Nuclear} {Waste} in {Fuel} {Cycle} {Analysis}},
    	abstract = {This research focuses on the development of an economic model for waste
     entering a repository. This work couples a repository loading model with an economic
     accounting system to determine a cost based on repository usage.
     
    
    The repository loading model determines the amount of repository space used by
     an arbitrary waste stream. Using the economic model in VISION.econ, the arbitrary
     waste stream can be assigned a cost. The cost for the space used is calibrated by
     computing the cost per meter of repository space if spent fuel is directly emplaced. This
     allows for accurate comparison between direct disposal and different recycling schemes.
     The length-based disposal cost accounts for fuel from different fuel types, burnups, and
     High-Level Waste (HLW) with an arbitrary isotope mix.
    
    Key derivatives of this work are an accounting system that can account for the
     repository savings of reprocessing and the ability to compare direct disposal to
     reprocessing with varying separation schemes. From this work, it was determined that the
     current mass-based accounting system for HLW disposal costs can be significantly
     different than the length-based accounting system proposed in this work when advanced
     reprocessing schemes are implemented. Furthermore, this work shows the length-based
     accounting system may be needed to find the disposal cost at which reprocessing is
    economically equivalent to direct disposal.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Grady, Ryan},
    	year = {2008},
    }
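
    As a rough, hypothetical illustration of the length-based accounting described in the abstract above: the cost per meter of repository drift is calibrated from the direct-disposal (once-through) case and then charged to any other waste stream according to the drift length it consumes. All numbers below are placeholders, not values from the thesis.

        # Illustrative length-based disposal cost; all values are placeholders.
        def usd_per_meter(direct_disposal_cost_usd, drift_length_m):
            """Calibrate $/m from the once-through, direct-disposal case."""
            return direct_disposal_cost_usd / drift_length_m

        def disposal_cost(waste_drift_length_m, rate_usd_per_m):
            """Charge an arbitrary waste stream for the drift length it uses."""
            return waste_drift_length_m * rate_usd_per_m

        rate = usd_per_meter(direct_disposal_cost_usd=1.0e9, drift_length_m=5.0e4)
        print(disposal_cost(waste_drift_length_m=2.0e3, rate_usd_per_m=rate))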
    
  31. Po Hu, "Coupled Neutronics/Thermal-hydraulics Analyses of Supercritical Water Reactor", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2008)
    The Supercritical Water Reactor (SCWR) is a next generation nuclear reactor concept well known for its system simplification and high thermal efficiency. The current study develops analysis capability for the U.S. reference design by extending existing LWR analysis codes and studying SCWR behavior under steady state, burnup and transient conditions. An extended version of PARCS that can analyze the SCWR in steady state is developed first. The modified code is used to demonstrate the importance of moderator heating on the neutronics behavior of the U.S. SCWR design by simulating an infinite lattice of assemblies. The results show that the moderator heating leads to a more symmetric effective moderator density and has a significant impact on the axial power shape. From this, sensitivity calculations are performed to show how the assembly performs with perturbations in assembly power, mass flow rate, bypass ratio and heat transfer coefficient. In order to study transients and the flow distribution between assemblies in the SCWR, a coupled PARCS/RELAP5 code package specialized for the current SCWR design is developed in this study. The variable mapping input file and related subroutines in PARCS are modified to transfer the physical properties of coolant and moderator separately between the coupled codes, and necessary code modification is also done in PARCS to automatically perform neutronics feedback based on not only fuel and coolant but also moderator physical properties. A finer data grid in the RELAP5 water table is adopted above the supercritical point to enable thermal-hydraulic simulation in this range. A whole SCWR core model for the coupled PARCS/RELAP5 is established and used in the rest of the study. Flow reversal in downward flowing moderator channels is discovered in steady state. It is due to the positive feedback between flow rate and the flow density change necessary for pressure balance. Choosing different orifice sizes based on corresponding assembly powers can prevent the reversal. The comparison of results from the coupled simulations with/without flow reversal shows that the reversed moderator flow introduces a large axial power peak at the bottom of the core and reduces the core reactivity. A burnup calculation shows that under the current design parameters the reactor cannot sustain criticality for one year, so further investigation of burnup is needed. A possible moderator reversal is found during the burnup calculation, suggesting that the change in core axial power distribution during burnup should be considered when designing the various orifice sizes to prevent the reversal. A SCWR system model is developed which adds the balance of plant to the core model. Three transients are studied: loss of feedwater, loss of off-site power and loss of turbine load without scram. The results show that the maximum cladding surface temperatures satisfy the material limit. The location of the maximum cladding surface temperature is not in the maximum power assembly, which suggests the normal hot channel analysis method may not be applicable to the SCWR. Future work on sub-channel analysis, achieving full cycle burnup and more safety analyses is proposed.
    @phdthesis{hu_coupled_2008,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Coupled {Neutronics}/{Thermal}-hydraulics {Analyses} of {Supercritical} {Water} {Reactor}},
    	abstract = {The Supercritical Water Reactor (SCWR) is a next generation nuclear reactor concept well known for its system simplification and high thermal efficiency. The current study develops analysis capability for the U.S. reference design by extending existing LWR analysis codes and study SCWR behaviors under steady state, burnup and transient conditions.
    
    An extended version of PARCS that can analyze SCWR in steady state is developed first. The modified code is used to demonstrate the importance of moderator heating on the neutronics behavior of U.S. SCWR design by simulating an infinite lattice of assemblies. The results show that the moderator heating leads to a more symmetric effective moderator density, and has a significant impact on the axial power shape. From this, sensitivity calculations are performed to show how the assembly performs with
    perturbations in assembly power, mass flow rate, bypass ratio and heat transfer coefficient.
    
    In order to study transient and flow distribution between assemblies in SCWR, a coupled PARCS/RELAP5 code package specialized for current SCWR design is developed in this study. The variable mapping input file and related subroutines in PARCS are modified to transfer the physical properties of coolant and moderator separately between the coupled codes, and necessary code modification is also done in
    PARCS to automatically perform neutronics feedback based on not only fuel and coolant but also moderator physical properties. A finer data grid in the RELAP5 water table is adopted above the supercritical point to enable the thermal-hydrodynamics simulation in this range. A whole SCWR core model for the coupled PARCS/RELAP5 is established for this study and used in the rest of the study.
    
    Flow reversal in downward flowing moderator channels is discovered in steady state. It is due to the positive flow rate feedback to flow density change necessary for pressure balance. Choosing different orifice sizes based on corresponding assembly powers can prevent the reversal. The comparison of results from the coupled simulations with/without flow reversal shows that the reversed moderator flow introduces a large axial power peak at the bottom of the core and reduces the core reactivity. 
    
    A burnup calculation shows that under the current design parameters the reactor cannot sustain criticality for one year, therefore further investigation on burnup is needed. A possible moderator reversal is found during the burnup calculation suggesting that the change in core axial power distribution during burnup should be considered while designing the various orifice sizes to prevent the reversal.
    
    A SCWR system model is developed which adds balance of the plant to the core model. Three transients are studied: loss of feedwater, loss of off-site power and loss of turbine load without scram. The results show that the maximum cladding surface temperatures satisfy the material limit. The location of maximum cladding surface temperature is not in the maximum power assembly. This suggests the normal hot channel analysis method may not applicable to SCWR.
    
    Future work on sub-channel analysis, achieving full cycle burnup and more safety analyses is proposed.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Hu, Po},
    	year = {2008},
    }
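
    The coupling described in the abstract above, in which power shapes and separate coolant/moderator properties are exchanged between a neutronics solver and a thermal-hydraulics solver until they agree, can be sketched as a fixed-point (Picard) iteration. The callables below are toy stand-ins, not the actual PARCS/RELAP5 interface.

        # Minimal fixed-point coupling sketch; solver callables are hypothetical stand-ins.
        def coupled_steady_state(solve_neutronics, solve_thermal_hydraulics,
                                 power, tol=1e-5, max_iter=50):
            for _ in range(max_iter):
                # The T/H side returns fuel temperature plus coolant and moderator
                # properties separately, mirroring the split feedback in the abstract.
                fuel_t, coolant, moderator = solve_thermal_hydraulics(power)
                new_power = solve_neutronics(fuel_t, coolant, moderator)
                if max(abs(a - b) for a, b in zip(new_power, power)) < tol:
                    return new_power
                power = new_power
            raise RuntimeError("coupled iteration did not converge")

        # Toy usage with algebraic stand-ins for the two codes:
        th = lambda p: ([0.5 * q for q in p], None, None)
        neu = lambda fuel_t, coolant, moderator: [1.0 + 0.1 * t for t in fuel_t]
        print(coupled_steady_state(neu, th, power=[1.0, 1.0]))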
    
  32. Tracy E. Radel, "Repository Modeling for Fuel Cycle Scenario Analysis", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2007)
    This research is focused on developing a model to determine repository loading for an arbitrary isotopic vector based on thermal heat loads for Yucca Mountain. The model will be implemented into a fuel cycle scenario analysis code to investigate the repository benefit of various fuel cycles from an integrated systems perspective. Three limiting temperature cases were previously identified from dose limits on the repository: the drift wall at emplacement and closure must remain below 200 °C and the mid-drift point must remain below 96 °C at all times. Based on a pre-existing detailed thermal model of Yucca Mountain, streamlined models were developed for these limiting cases, each with a functional form that captures the appropriate transient effects. The emplacement limit was dependent on the initial heat load as well as the rate at which the heat load was changing. The closure limit was approximated by a constant heat load limit, as the decay heat does not change rapidly near the time of closure. The model for the mid-drift limit uses superposition of individual isotope contributions to the mid-drift temperature rather than decay heat values. Implementation in the VISION systems analysis code offers a powerful tool for studying the effects of an integrated fuel cycle on repository loading values. A complete repository loading model has never been coupled with a fuel cycle systems code. Effects of delays in the fuel cycle, changes in separation processes, variations in reactor combinations, and other dynamic fuel cycle parameters can now be investigated using this model. Results discussed in this paper show that an increase in separation efficiency above 0.2% would have less than a 1% impact on repository loading. However, separation of Cs and Sr into an alternate waste stream results in increased loading of 285 times over a traditional once-through cycle for some fuel cycle scenarios. The ability to have a varying time until closure in the systems model also shows a significant impact, reducing the benefit over a once-through cycle from 5 times to 2.4 times because of temperature limits at closure.
    @phdthesis{radel_repository_2007,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Repository {Modeling} for {Fuel} {Cycle} {Scenario} {Analysis}},
    	abstract = {This research is focused on developing a model to determine repository loading for an arbitrary isotopic vector based on thermal heat loads for Yucca Mountain. The model will be implemented into a fuel cycle scenario analysis code to investigate repository benefit of various fuel cycles from an integrated systems perspective.
    
    Three limiting temperature cases were previously identified from dose limits on the repository: the drift wall at emplacement and closure must remain below 200 °C and the mid- drift point must remain below 96 °C at all times. Based on a pre-existing detailed thermal model of Yucca Mountain, streamlined models were developed for these limiting cases, each with a functional form that captures the appropriate transient effects. The emplacement limit was dependent on the initial heat load as well as the rate at which the heat load was changing. The closure limit was approximated by a constant heat load limit, as the decay heat does not change rapidly near the time of closure. The model for the mid-drift limit uses superposition of individual isotope contributions to the mid-drift temperature rather than decay heat values.
    
    Implementation in the VISION systems analysis code, offers a powerful tool for studying the effects of an intergraded fuel cycle on repository loading values. A complete repository loading model has never been coupled with a fuel cycle systems code. Effects of delays in the fuel cycle, changes in separation processes, variations in reactor combinations, and other dynamic fuel cycle parameters can now be investigated using this model. 
    
    Results discussed in this paper show that an increase in separation efficiency above 0.2\% would have less than a 1\% impact on repository loading. However, separation of Cs and Sr into an alternate waste steam results in increased loading of 285 times over a traditional once through cycle for some fuel cycle scenarios. The ability to have a varying time until closure in the systems model also shows a significant impact, reducing the benefit over a once through cycle from 5 times to 2.4 times because of temperature limits at closure.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Radel, Tracy E.},
    	year = {2007},
    }
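
    The mid-drift limit mentioned in the abstract above is evaluated by superposing per-isotope contributions to the mid-drift temperature rather than applying a single decay-heat limit. A minimal sketch of that superposition, with hypothetical per-isotope coefficients, might look like the following.

        # Superposition of per-isotope contributions to the mid-drift temperature rise.
        # Coefficients (kelvin per gram) are hypothetical placeholders.
        TEMP_RISE_K_PER_G = {"Am241": 0.004, "Pu238": 0.009, "Cs137": 0.002}

        def mid_drift_temperature_c(ambient_c, grams_by_isotope):
            rise = sum(TEMP_RISE_K_PER_G.get(iso, 0.0) * grams
                       for iso, grams in grams_by_isotope.items())
            return ambient_c + rise

        temp = mid_drift_temperature_c(25.0, {"Am241": 1500.0, "Cs137": 800.0})
        assert temp < 96.0, "96 degC mid-drift limit exceeded"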
    
  33. Timothy Setter, "Neutron/Gamma Mixed Spectrum Radiolysis-Based Aqueous Dosimetry", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2007)
    This work develops a method to use an aqueous dosimeter in a mixed radiation field to determine separate measurements of neutron and gamma dose. Based on radiolysis of both Fricke and Methyl Viologen (MV) solutions, activation analysis and reactor simulation are combined to determine neutron dose and neutron radiolysis. This is subtracted from the total measured radiolysis to infer a gamma dose. The Fricke dosimeter was able to give repeatable results for the neutron and gamma doses over a number of days for a variety of shielding configurations. Impurities in the MV dosimeter prevented it from providing repeatable results, but qualitative comparison to the Fricke dosimeter indicated that it could be a viable approach. The method found that the reactor simulation, using MCNP5, can be used for accurate neutron simulations but does not account for all the source terms for gamma dose simulation. A neutron G-value for the Fricke dosimeter was developed by combining proton radiolysis simulations with results from MCNP5 and NJOY.
    @phdthesis{setter_neutrongamma_2007,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Neutron/{Gamma} {Mixed} {Spectrum} {Radiolysis}-{Based} {Aqueous} {Dosimetry}},
    	abstract = {This work develops a method to use an aqueous dosimeter in a mixed radiation field to determine separate measurements of neutron and gamma dose. Based on radiolysis of both Fricke and Methyl Viologen (MV) solutions, activation analysis and reactor simulation are combined to determine neutron dose and neutron radiolysis. This is subtracted from the total measured radiolysis to infer a gamma dose. The Fricke dosimeter was able to give repeatable results for the neutron and gamma doses over a number of days for a variety of shielding configurations. Impurities in the MV dosimeter prevented it from providing repeatable results, but qualitative comparison to the Fricke dosimeter indicated that it could be a viable approach. The method found that the reactor simulation, using MCNP5, can be used for accurate neutron simulations but does not account for all the source terms for gamma dose simulation. A neutron G-value for the Fricke dosimeter was developed by combining proton radiolysis simulations with results from MCNP5 and NJOY.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Setter, Timothy},
    	year = {2007},
    }
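
    The dose separation described above reduces to a subtraction: the neutron dose, and the radiolysis it produces, is computed independently from activation analysis and reactor simulation, and the remainder of the measured radiolysis is attributed to gammas. The sketch below is illustrative only; the G-values are placeholders, not measured values from the thesis.

        # Infer gamma dose from total measured radiolysis minus the neutron part.
        # G-values (product yield per unit dose) are illustrative placeholders.
        def gamma_dose_gy(total_product_yield, neutron_dose_gy, g_neutron, g_gamma):
            neutron_yield = g_neutron * neutron_dose_gy        # from activation + MCNP5
            gamma_yield = total_product_yield - neutron_yield  # remainder is gamma-induced
            return gamma_yield / g_gamma

        print(gamma_dose_gy(total_product_yield=12.0, neutron_dose_gy=5.0,
                            g_neutron=1.2, g_gamma=1.6))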
    
  34. Eric J. Edwards, "Determination of Pure Neutron Radiolysis Yields for use in Chemical Modeling of Supercritical Water", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2007)
    This work has determined pure neutron radical yields at elevated temperature and pressure up to supercritical conditions using reactor core radiation. The data are necessary to provide realistic conditions for material corrosion experiments for the supercritical water reactor (SCWR) through water chemistry modeling. The work has been performed at the University of Wisconsin Nuclear Reactor using an apparatus designed to transport supercritical water near the reactor core. Low-LET yield data used in the experiment were provided by a similar project at the Notre Dame Radiation Laboratory. Radicals formed by radiolysis were measured through chemical scavenging reactions. The aqueous electron was measured by two methods, a reaction with N2O to produce molecular nitrogen and a reaction with SF6 to produce fluoride ions. The hydrogen radical was measured through a reaction with ethanol-d6 (CD3CD2OD) to form HD. Molecular hydrogen was measured directly. Gaseous products were measured with a mass spectrometer and ions were measured with an ion selective electrode. Radiation energy deposition was calibrated for neutron and gamma radiation separately with a neutron activation analysis and a radiolysis experiment. Pure neutron yields were calculated by subtracting the gamma contribution using the calibrated gamma energy deposition and yield results from work at the Notre Dame Radiation Laboratory. Pure neutron yields have been experimentally determined for aqueous electrons from 25 °C to 400 °C at 248 bar and for the hydrogen radical from 25 °C to 350 °C at 248 bar. Isothermal data have been acquired for the aqueous electron at 380 °C and 400 °C as a function of density. Molecular hydrogen yields were measured as a function of temperature and pressure, although there was evidence that chemical reactions with the walls of the water tubing were creating molecular hydrogen in addition to that formed through radiolysis. Critical hydrogen concentration behavior was investigated but a final result was not determined because a measurable oxygen yield was not seen at the outlet of the radiolysis loop.
    @phdthesis{edwards_determination_2007,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Determination of {Pure} {Neutron} {Radiolysis} {Yields} for use in {Chemical} {Modeling} of {Supercritical} {Water}},
    	abstract = {This work has determined pure neutron radical yields at elevated temperature and pressure up to supercritical conditions using a reactor core radiation. The data will be necessary to provides realistic conditions for material corrosion experiments for the supercritical water reactor (SCWR) through water chemistry modeling. The work has been performed at the University of Wisconsin Nuclear Reactor using an apparatus designed to transport supercritical water near the reactor core. Low LET yield data used
    in the experiment was provided by a similar project at the Notre Dame Radiation Lab.
    
    Radicals formed by radiolysis were measured through chemical scavenging reactions. The aqueous electron was measured by two methods, a reaction with N2O to produce molecular nitrogen and a reaction with SF6 to produce fluoride ions. The hydrogen radical was measured through a reaction with ethanol-D6 (CD3CD2OD) to form HD. Molecular hydrogen was measured directly. Gaseous products were measured with a mass spectrometer and ions were measured with an ion selective electrode. Radiation
    energy deposition was calibrated for neutron and gamma radiation separately with a neutron activation analysis and a radiolysis experiment.  Pure neutron yields were calculated by subtracting gamma contribution using the calibrated gamma energy deposition and yield results from work at the Notre Dame Radiation Laboratory.
    
    Pure neutron yields have been experimentally determined for aqueous electrons from 25o to 400o C at 248 bar and for the hydrogen radical from 25o C to 350o C at 248 bar. Isothermal data has been acquired for the aqueous electron at 380o C and 400o C as a function of density. Molecular hydrogen yields were measured as a function of temperature and pressure, although there was evidence that chemical reactions with the walls of the water tubing were creating molecular hydrogen in addition to that formed
    through radiolysis. Critical hydrogen concentration behavior was investigated but a final result was not determined because a measurable oxygen yield was not seen at the outlet of the radiolysis loop.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Edwards, Eric J.},
    	year = {2007},
    }
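
    The yield decomposition described above amounts to a balance of the form G_mix*D_total = G_n*D_n + G_gamma*D_gamma, solved for the pure neutron G-value once the two energy depositions have been calibrated separately. The sketch below is purely illustrative; all inputs are placeholders, not values from the thesis.

        # Solve the mixed-field yield balance for the pure neutron G-value.
        # All inputs are placeholders; calibrated doses would come from activation
        # analysis (neutron) and a separate gamma radiolysis calibration.
        def pure_neutron_g_value(g_mixed, dose_neutron, dose_gamma, g_gamma):
            dose_total = dose_neutron + dose_gamma
            return (g_mixed * dose_total - g_gamma * dose_gamma) / dose_neutron

        print(pure_neutron_g_value(g_mixed=2.0, dose_neutron=3.0,
                                   dose_gamma=1.0, g_gamma=2.8))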
    
  35. Phiphat Phruksarojanakun, "Monte Carlo Isotopic Inventory Analysis for Complex Nuclear Systems", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2007)
    The Monte Carlo Inventory Simulation Engine, or MCise, is a newly developed method for calculating the isotopic inventory of materials. The method offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current deterministic tools. Monte Carlo techniques based on following the history of individual atoms allow those atoms to follow randomly determined flow paths, enter or leave the system at arbitrary locations, and be subjected to radiation or chemical processes at different points in the flow path. The method has strong analogies to Monte Carlo neutral particle transport. The fundamentals of the analog method are fully developed, including considerations for simple, complex and loop flows. The validity of the analog method is demonstrated with test problems under various flow conditions. The method reproduces the results of a deterministic inventory code for comparable problems. While a successful and efficient parallel implementation has permitted an inexpensive way to improve statistical precision by increasing the number of sampled atoms, this approach does not always provide the most efficient avenue for improvement. Therefore, six variance reduction tools are implemented as alternatives to improve the precision of Monte Carlo simulations. Forced Reaction is designed to force an atom to undergo a predefined number of reactions in a given irradiation environment. Biased Reaction Branching is primarily focused on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling is aimed at increasing the frequency of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to greatly reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to evaluate the efficiency of a variance reduction technique. A number of possibilities for the figure of merit are explored, two of which prove robust: one is based on the relative error of a known target isotope (1/R_T^2) and another on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define the necessary parameters for some variance reduction techniques in a problem with a target isotope. Initial sample problems demonstrate that AVA improves both the precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of an actinide fluoride eutectic liquid fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined using MCise. The results confirm MCise as a reliable tool for inventory analysis of complex nuclear systems.
    @phdthesis{phruksarojanakun_monte_2007,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Monte {Carlo} {Isotopic} {Inventory} {Analysis} for {Complex} {Nuclear} {Systems}},
    	abstract = {Monte Carlo Inventory Simulation Engine or MCise is a newly developed method for calculating isotopic inventory of materials. The method offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current deterministic tools. Monte Carlo techniques based on following the history of individual atoms allows those atoms to follow randomly determined flow paths, enter or leave the system at arbitrary locations, and be subjected to radiation or chemical processes at different points in the flow path.
    
    The method has strong analogies to Monte Carlo neutral particle transport. The fundamental of analog method is fully developed, including considerations for simple, complex and loop flows. The validity of the analog method is demonstrated with test problems under various flow conditions. The method reproduces the results of a deterministic inventory code for
    comparable problems. While a successful and efficient parallel implementation has permitted an inexpensive way to improve statistical precision by increasing the number of sampled atoms, this approach does not always provide the most efficient avenue for improvement. Therefore, six variance reduction tools are implemented as alternatives to improve precision
    of Monte Carlo simulations. Forced Reaction is designed to force an atom to undergo a predefined number of reactions in a given irradiation environment. Biased Reaction Branching is primarily focused on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling is aimed at increasing frequencies of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for a high-frequency pulsing to greatly reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the uses of variance reduction techniques.
    
    A figure of merit is necessary to evaluate the efficiency of a variance reduction technique. A number of possibilities for the figure of merit are explored, two of which offer robust figures of merit. One figure of merit is based on the relative error of a known target isotope (1/R2 T ) and another on the overall detection limit corrected by the relative error (1/Dk R2 T ). An
    automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define necessary parameters for some variance reduction techniques in a problem with a target isotope. Initial sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner.
    
    Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of an actinide fluoride eutectic liquid fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined using MCise. The result reassures MCise as a reliable tool for inventory analysis of complex nuclear systems.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Phruksarojanakun, Phiphat},
    	year = {2007},
    }
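
    The atom-following Monte Carlo idea in the abstract above can be pictured with a minimal analog sketch: each sampled atom is followed through randomly timed transmutation events, and the tallied end states estimate the inventory. This is far simpler than MCise (no flows, reactions, or variance reduction) and the half-lives are arbitrary placeholders.

        # Minimal analog atom-following sketch for a two-step decay chain A -> B -> C.
        import math
        import random

        def follow_atom(t_end, half_life_a, half_life_b):
            t = random.expovariate(math.log(2) / half_life_a)    # time of A -> B decay
            if t > t_end:
                return "A"
            t += random.expovariate(math.log(2) / half_life_b)   # time of B -> C decay
            return "B" if t > t_end else "C"

        counts = {"A": 0, "B": 0, "C": 0}
        for _ in range(100_000):
            counts[follow_atom(t_end=10.0, half_life_a=5.0, half_life_b=2.0)] += 1
        print(counts)   # sampled end-of-history inventory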
    
  36. Benjamin J. Schmitt, "Accounting for Core Burnup in Reactor Analysis of the University of Wisconsin Nuclear Reactor", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2006)
    The University of Wisconsin Nuclear Reactor, a 1 MW TRIGA open-pool reactor, is an important part of the Nuclear Engineering program at the University of Wisconsin-Madison and has provided research and teaching opportunities for the past 40 years. In an earlier work, a model of the fresh reactor core at this facility was developed for MCNP and used to model several experiments. To allow the simulation of current conditions in the reactor, a model incorporating fuel burnup and isotopic decay needed to be designed and benchmarked. Two different simulation methods were examined for this model. MONTEBURNS, a code that links ORIGEN2 and MCNP5, was used to simulate burnup based on the previous MCNP5 model of the core. HELIOS, a two-dimensional deterministic lattice physics code, was used with a 1-D diffusion model to simulate burnup in a three-dimensional core. A bank height adjustment module was designed and implemented in MONTEBURNS to ensure the model was approximately critical (1.001 > keff > 0.999) at every calculation step, with the aim of improving results from MONTEBURNS. The results were benchmarked against measured values of core critical bank height, core excess reactivity, and core shutdown margin, and against differential blade worth curves based on measured values. For both the MONTEBURNS and HELIOS models, core bank height had, on average, a deviation of 0.53 inches from the recorded values. The average absolute deviation of simulated shutdown margin values from actual shutdown margin values ranged from 0.50 %ρ for the HELIOS case to 1.38 %ρ for the high-power basic MONTEBURNS case. For the simulated excess reactivity values, the average absolute deviation of the simulated values from recorded values ranged from 2.09 %ρ for the basic low power MONTEBURNS case to 3.96 %ρ for HELIOS. The blade worth curves had integrated reactivities that were within 0.68% to 16.48% of recorded values. Overall, results were mixed, with the simulations having similar trends to recorded data, but with simulation errors giving inconsistencies in every measurement. Much more work needs to be done before these simulations are relied upon for critical information, but this work provides the basis for further, more accurate simulations of reactor burnup for the UW-Madison Nuclear Reactor.
    @phdthesis{schmitt_accounting_2006,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Accounting for {Core} {Burnup} in {Reactor} {Analysis} of the {University} of {Wisconsin} {Nuclear} {Reactor}},
    	abstract = {The University of Wisconsin Nuclear Reactor, a 1 MW TRIGA open-pool reactor, is an important part of the Nuclear Engineering program at the University of Wisconsin – Madison and has provided research and teaching opportunities for the past 40 years. In an earlier work, a model of the fresh reactor core at this facility was developed for MCNP and used to model several experiments. To allow the simulation of current conditions in the reactor, a model incorporating fuel burnup and isotopic decay needed to be designed and benchmarked. Two different simulation methods were
    examined for this model. MONTEBURNS, a code that links ORIGEN2 and MCNP5, was used to simulate burnup based on the previous MCNP5 model of the core. HELIOS, a two-dimensional deterministic lattice physics code was used with a 1-D diffusion model to simulate burnup in a three-dimensional core. A bank height adjustment module was designed and implemented in MONTEBURNS to ensure the model was approximately critical ( 1.001{\textgreater}keff{\textgreater}0.999) at every calculation step, with the aim of improving results from MONTEBURNS. The results were benchmarked against measured values of core critical bank height, core excess reactivity, and core shutdown margin, and against differential blade worth curves based on measured values. For both the MONTEBURNS and HELIOS models, core bank height had, on average, a deviation of 0.53 inches from the recorded values. Average absolute deviation of simulated shutdown margin values from actual shutdown margin values ranged between 0.50 \% ρ for the HELIOS case to 1.38 \% ρ for the high-power basic MONTEBURNS case. For the simulated excess reactivity values, the average absolute deviation of the simulated values from recorded
    values ranged from 2.09 \% ρ for the basic low power MONTEBURNS case to 3.96 \% ρ for HELIOS. The blade worth curves had integrated reactivities that were within 0.68\% to 16.48\% of recorded values. Overall, results were mixed, with the simulations having similar trends to recorded data, but with simulation errors giving inconsistencies in every
    measurement. Much more work needs to be done before these simulations are relied upon for critical information, but this work provides the basis for further, more accurate simulations of reactor burnup for the UW-Madison Nuclear Reactor.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Schmitt, Benjamin J.},
    	year = {2006},
    }
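
    The bank height adjustment module described above searches, at each burnup step, for a control blade bank position that keeps the model near critical (0.999 < keff < 1.001). A hypothetical bisection-style sketch is shown below; the criticality call is a toy stand-in, not MONTEBURNS or MCNP5, and it assumes keff increases monotonically as the bank is withdrawn.

        # Hypothetical bisection search for a near-critical bank height.
        def adjust_bank_height(run_keff, low_cm, high_cm, target=1.0, tol=1e-3, max_iter=30):
            """run_keff(height) stands in for a criticality calculation."""
            for _ in range(max_iter):
                mid = 0.5 * (low_cm + high_cm)
                keff = run_keff(mid)
                if abs(keff - target) < tol:   # inside the 0.999 < keff < 1.001 band
                    return mid
                if keff > target:
                    high_cm = mid              # too reactive: lower the bank
                else:
                    low_cm = mid               # subcritical: withdraw further
            return mid

        toy_keff = lambda h_cm: 0.95 + 0.002 * h_cm   # toy monotonic response
        print(adjust_bank_height(toy_keff, low_cm=0.0, high_cm=50.0))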
    
  37. Christopher Staum, "Characterization Of Gamma Radiation Fields At The University Of Wisconsin Nuclear Reactor", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2006)
    @phdthesis{staum_characterization_2006,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Characterization {Of} {Gamma} {Radiation} {Fields} {At} {The} {University} {Of} {Wisconsin} {Nuclear} {Reactor}},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Staum, Christopher},
    	year = {2006},
    }
    
  38. Milad Fatenejad, "Development and Use of Tools for Modularization of Activation Programs", BS Nuclear Engineering, University of Wisconsin-Madison, (8/23/2005)
    @phdthesis{fatenejad_development_2005,
    	type = {{BS} {Nuclear} {Engineering}},
    	title = {Development and {Use} of {Tools} for {Modularization} of {Activation} {Programs}},
    	school = {University of Wisconsin-Madison},
    	author = {Fatenejad, Milad},
    	month = aug,
    	year = {2005},
    }
    
  39. Paul W. Humrickhouse, "Development of a Monte Carlo Model of the University of Wisconsin Nuclear Reactor", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2005)
    @phdthesis{humrickhouse_development_2005,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {Development of a {Monte} {Carlo} {Model} of the {University} of {Wisconsin} {Nuclear} {Reactor}},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Humrickhouse, Paul W.},
    	year = {2005},
    }
    
  40. Geoffrey Bull, "A Transparent Methodology for Integration into Proliferation Resistance Determination", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2005)
    Part of the Gen IV initiative for new nuclear reactors is to make proliferation resistance an integral part of the design. Measuring or quantifying proliferation resistance for a nuclear system or nuclear material has been an ongoing challenge. Multi-attribute utility analysis has been a widely accepted strategy for meeting this challenge for over 20 years. These techniques have relied heavily on expert judgment, not only for the relative weights assigned to each metric, but also for determining the general form of the utility functions. A new approach to the determination of proliferation resistance seeks to minimize the reliance on expert judgment. The formation of a transparent methodology, one straightforward and dependent upon strictly measurable or calculable properties, would be a first step in accomplishing this goal. Using the framework for determining proliferation resistance developed by the Proliferation Resistance and Physical Protection (PR&PP) Working Group at Argonne National Laboratory, a strategy for beginning a transparent methodology is proposed. A contribution to the transparent methodology from the proliferation resistance measure Proliferation Resources is analyzed and compared to an established proliferation resistance methodology. Five pathways for achieving the goal of nuclear explosives are analyzed and compared with respect to the resources, in units of dollars, required for success. The methodology shows that processing spent nuclear fuel is likely to be the least preferred proliferation pathway for a proliferator to pursue with respect to the resources required to do so. The processing of natural uranium, either through diffusion or centrifuge enrichment, will generally be a preferred pathway. The shape of the transparent methodology with respect to plutonium content differs from the methodology developed in the Accelerator Transmutation of Waste program. The general trend of decreasing proliferation resistance as the 239Pu content increases is the same between the two. This indicates that there may be simplified, transparent ways to assess the proliferation resistance of systems that minimize the use of expert judgment, relying instead on physical characteristics of systems and materials. Further development of proliferation methodologies may draw on aspects of both methodologies.
    @phdthesis{bull_transparent_2005,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {A {Transparent} {Methodology} for {Integration} into {Proliferation} {Resistance} {Determination}},
    	abstract = {Part of the Gen IV initiative for new nuclear reactors is to make proliferation resistance an
     integral part of the design. Measuring or quantifying proliferation resistance for a nuclear system
     or nuclear material has been an ongoing challenge. Multi-attribute utility analysis has been a
     widely accepted strategy for meeting this challenge for over 20 years. These techniques have
     relied heavily on expert judgment not only for the relative weights assigned to each metric, but
     also for determining the general form for the utility functions.
    
    A new approach to the determination of proliferation resistance is trying to minimize the
     reliance on expert judgment. The formation of a transparent methodology, one straightforward
     and dependent upon strictly measurable or calculable properties, would be a first step in
     accomplishing this goal. Using the framework for determining proliferation resistance developed
     by the Proliferation Resistance and Physical Protection (PR\&PP) Working Group at Argonne
     National Laboratory, a strategy for beginning a transparent methodology is proposed.
     
    
    A contribution to the transparent methodology from the proliferation resistance measure
     Proliferation Resources is analyzed and compared to an established proliferation resistance
     methodology. Five pathways for achieving the goal of nuclear explosives are analyzed and
     compared with respect to the resources, in units of dollars, required for success. The
     methodology shows that processing spent nuclear fuel is likely to be the least preferred
     proliferation pathway for a proliferator to pursue with respect to the resources required to do so.
     The processing of natural uranium, either through diffusion or centrifuge enrichment, will
     generally be a preferred pathway.
    
    The shape of the transparent methodology with respect to plutonium content differs from
     the methodology developed in the Accelerator Transmutation of Waste program. The general
     trend of decreasing proliferation resistance as the 239Pu content increases is the same between the
     two. This indicates that there may be simplified, transparent ways to assess the proliferation
     resistance of systems that minimize the use of expert judgment, relying instead on physical
     characteristics of systems and materials. Further development of proliferation methodologies
     may draw on aspects of both methodologies.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Bull, Geoffrey},
    	year = {2005},
    }
    
  41. Luke Olson, "UWNR Supercritical Water Radiolysis Experiment: Thermal-Hydraulics Analysis", MS Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (2005)
    Operating near and above the thermodynamic supercritical point of water promises to improve economics, but temperature-related corrosion issues present a challenge for proposed future nuclear reactor concepts. For Rankine power cycles, one of the issues affecting corrosion is the radiolysis rate of water, which invariably alters water chemistry and thus affects corrosion rates. A fundamental experiment has been designed for water radiolysis at the UW nuclear reactor. The purpose of this experiment was to measure the neutron radiolysis rates of water at supercritical temperatures and pressures. The UWNR Supercritical Radiolysis Experiment is located at the University of Wisconsin Nuclear Reactor (UWNR) in the Mechanical Engineering Building in Madison, Wisconsin. The UWNR has four beam-ports that extend from the outside of the concrete reactor shield and structure to near the periphery of the core, and the experiment uses beam-port #2. The experimental facility was designed to vary in pressure from 10 MPa to 35 MPa and in temperature from 30 °C to 500 °C. Water purified to a resistance of 18 MΩ and purged with nitrogen or various other gases was pumped up to pressure at the desired flow rate and then through the experimental apparatus. A length of HASTELLOY® tubing wrapped around a cartridge heater first heated the sample water. Then a joule-heated section of the tubing heated the sample water so that it would be at the desired temperature when it entered the irradiation volume subject to the neutron flux. The tubing in the irradiation volume was also joule heated in order to minimize the temperature difference between the entrance and exit of the irradiation volume. The sample water was then cooled down in the water radiation shield to about 30 °C and exited the apparatus. Its pressure was then reduced to ambient pressure in a length of capillary tubing submerged in a cooling bath. The purpose of this thesis is to describe the thermal-hydraulic analysis associated with this radiolysis experiment. The interior of the apparatus was at a partial vacuum to minimize convective heat transfer; however, heat transfer from conduction and radiation was still significant due to the increased temperatures and slow movement of the sample fluid. In the irradiation section, joule heating minimized the temperature difference from entrance to exit of the water sample. From data recorded during experiments, one could calculate a heat transfer coefficient and a view factor for the irradiation section to the apparatus. In the cartridge heater section, the heat transfer coefficient was calculated and compared to the Dittus-Boelter heat-transfer coefficient correlation. Challenges encountered in operation of this device included the stability of the temperature in the irradiation volume as well as heat loss due to radiation and conduction. Several aspects of the apparatus proved difficult to control. The lead and graphite shields took a long time to reach a steady state temperature. In addition, since the sample water came in through the same water shield that it exited and dumped its remaining heat into, the sample water would gradually increase in temperature before reaching the cartridge heater over the length of an experimental run. The temperature, raised and lowered many times during an experiment, would never attain complete equilibrium. It was common to see temperature variations of about a degree occur over about a half hour to an hour, which was deemed acceptable for the testing when taking the residence time of the water sample into account.
    @phdthesis{olson_uwnr_2005,
    	address = {Madison, WI, United States},
    	type = {{MS} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {{UWNR} {Supercritical} {Water} {Radiolysis} {Experiment}: {Thermal}-{Hydraulics} {Analysis}},
    	abstract = {Operating near and above the thermodynamic supercritical point of water promises to
     improve economics but temperature-related corrosion issues present a challenge for proposed
     future nuclear reactor concepts. For Rankine power cycles, one of the issues affecting
     corrosion is the radiolysis rate of water, which invariably alters water chemistry and thus
     affects corrosion rates. A fundamental experiment has been designed for water radiolysis at
     the UW nuclear reactor. The purpose of this experiment was to measure the neutron
     radiolysis rates of water at supercritical temperatures and pressures. The UWNR
     Supercritical Radiolysis Experiment is located at the University of Wisconsin Nuclear
     Reactor (UWNR) in the Mechanical Engineering Building in Madison, Wisconsin. The
     UWNR has four beam-ports that extend from the outside of the concrete reactor shield and
     structure to near the periphery of the core and the experiment uses beam-port \#2.
    
    The experimental facility was to be able to vary in pressure from 10MPa to 35MPA
     and in temperature from 30oC to 500oC. Water purified to a resistance of 18 MΩ and purged
     with nitrogen or various other gases was pumped up to pressure at the desired flow-rate and
     then through the experimental apparatus. A length of HASTELLOY® tubing wrapped around a cartridge heater first heated the sample water. Then a joule-heated section of the
     tubing heated the sample water so that it would be at the desired temperature when it entered
     the irradiation volume subject to the neutron flux. The tubing in the irradiation volume was
     also joule heated in order to minimize the temperature difference between the entrance and
     exit of the irradiation volume. The sample water was then cooled down in the water
     radiation shield to about 30oC and exited the apparatus. Its pressure was then reduced to
     ambient pressure in a length of capillary tubing submerged in a cooling bath.
    
    The purpose of this thesis is to describe the thermal-hydraulic analysis that is
     associated with this radiolysis experiment. The interior of the apparatus was at a partial
     vacuum to minimize convective heat transfer, however heat transfer from conduction and
     radiation were still significant due to the increased temperatures and slow movement of the
     sample fluid. In the irradiation section, joule heating minimized the temperature difference
     from entrance to exit of the water sample. From data recorded during experiments, one could
     calculate a heat transfer coefficient and a view factor for the irradiation section to the
     apparatus. In the cartridge heater section, the heat transfer coefficient was calculated and
     compared to the Dittus-Boelter heat-transfer coefficient correlation. Challenges encountered
     in operation of this device included the stability of the temperature in the irradiation volume
     as well as heat loss due to radiation and conduction.
    
    Several aspects of the apparatus proved difficult to control. The lead and graphite
     shields took a long time to reach a steady state temperature. In addition, since the sample
     water came in through the same water shield that it exited and dumped its remaining heat
     into, the sample water would gradually increase in temperature before reaching the cartridge
     heater over the length of an experimental run. The temperature, raised and lowered many
     times during an experiment, would never attain complete equilibrium. It was common to see
     temperature variations of about a degree occur over about a half hour to an hour, which was
     deemed acceptable for the testing when taking the residence time of the water sample into
     account.},
    	language = {English},
    	school = {University of Wisconsin-Madison},
    	author = {Olson, Luke},
    	year = {2005},
    }
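
    The comparison mentioned in the abstract above checks a heat transfer coefficient inferred from measured data against the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^0.4 for a fluid being heated, with h = Nu k / D. The property values in this sketch are placeholders, not conditions from the experiment.

        # Dittus-Boelter estimate of the convective heat transfer coefficient.
        def dittus_boelter_h(reynolds, prandtl, conductivity_w_mk, diameter_m):
            nusselt = 0.023 * reynolds**0.8 * prandtl**0.4    # fluid being heated
            return nusselt * conductivity_w_mk / diameter_m   # h = Nu * k / D

        h = dittus_boelter_h(reynolds=2.0e4, prandtl=1.1,
                             conductivity_w_mk=0.6, diameter_m=2.0e-3)
        print(round(h), "W/m^2-K")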
    
  42. P.P.H. Wilson, "ALARA: Analytic and Laplacian Adaptive Radioactivity Analysis", PhD Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, (Apr 1999)
    @phdthesis{wilson_alara:_1999,
    	address = {Madison, WI, United States},
    	type = {{PhD} {Nuclear} {Engineering} and {Engineering} {Physics}},
    	title = {{ALARA}: {Analytic} and {Laplacian} {Adaptive} {Radioactivity} {Analysis}},
    	school = {University of Wisconsin-Madison},
    	author = {Wilson, P.P.H.},
    	month = apr,
    	year = {1999},
    	keywords = {ALARA, Accuracy, Linear Chains, Neutron Irradiation, Speed},
    }
    
  43. Paul P. H. Wilson, "Neutronics of the IFMIF Neutron Source: Development and Analysis", Dr.-Ing. Maschinenbau, Technical University of Karlsruhe, (1999)
    @phdthesis{wilson_neutronics_1999,
    	address = {Karlsruhe, Germany},
    	type = {Dr.-{Ing}. {Maschinenbau}},
    	title = {Neutronics of the {IFMIF} {Neutron} {Source}: {Development} and {Analysis}},
    	school = {Technical University of Karlsruhe},
    	author = {Wilson, Paul P. H.},
    	year = {1999},
    	note = {Maschinenbau; Nuclear Engineering Responses PPHW UW Thesis Ref \#25 FZKA-6218},
    	keywords = {ALARA, Accelerator-Driven, Deuterium, High Flux Test Module (HFTM), High Flux Test Region (HFTR), International Fusion Materials Irradiation Facility (IFMIF), Irradiation, Liquid Lithium, McDeLicious Code, Monte Carlo Neutron Transport Code, Neutron, damChar},
    }
    
  44. P.P.H. Wilson, "Two-Phase, Cross-Flow Induced Vibrations in Steam Generator U-Tubes", BASc Engineering Science, University of Toronto, (1992)
    @phdthesis{wilson_two-phase_1992,
    	address = {Toronto, Canada},
    	type = {{BASc} {Engineering} {Science}},
    	title = {Two-{Phase}, {Cross}-{Flow} {Induced} {Vibrations} in {Steam} {Generator} {U}-{Tubes}},
    	school = {University of Toronto},
    	author = {Wilson, P.P.H.},
    	year = {1992},
    }
    
