
*Scientific Reports* **volume 12**, Article number: 12858 (2022)


A difficult problem concerns the determination of magnetic field components within an experimentally inaccessible region when direct field measurements are not feasible. In this paper, we propose a new method of accessing magnetic field components using non-disruptive magnetic field measurements on a surface enclosing the experimental region. Magnetic field components in the experimental region are predicted by solving a set of partial differential equations (Ampère’s law and Gauss’ law for magnetism) numerically with the aid of physics-informed neural networks (PINNs). Prediction errors due to noisy magnetic field measurements and a small number of measurement locations are regularized by the physics-information term in the loss function. We benchmark our model against a previously proposed multipole-expansion method. The method we present will be of broad interest to experiments requiring precise determination of magnetic field components, such as searches for the neutron electric dipole moment.

Magnetic field mapping is commonly used in many fields of science, medicine, and technology, such as particle accelerators, nuclear storage experiments^{1,2,3}, cardiac beat detection^{4}, magnetic resonance imaging (MRI)^{5}, and magnetic indoor positioning systems (IPS)^{6,7}. For example, in nuclear and particle physics experiments, one example being the search for the neutron electric dipole moment, it is often crucial to measure and control the magnetic field components in the experimental region, because these experiments are typically sensitive to perturbations in magnetic fields. An undetected disturbance in a magnetic field may introduce systematic uncertainties and limit the precision of the measured quantities. To minimize systematic uncertainties, the magnetic field components should be monitored in real time and any unwanted field compensated during operation of the experiment. However, real-time measurement of the magnetic field in an experimental region of space is not always practical or feasible. In most cases, the experimental region is not accessible due to a physical enclosure (e.g., a setup placed in a vacuum chamber), or placing a magnetic field sensor inside the experimental region would be too disruptive to the system.

There exist several approaches in the literature that can be utilized to solve the problems stated above. For instance, Solin et al.^{8} make use of Gaussian processes (GPs) to interpolate/extrapolate ambient magnetic fields. They train the model using a data set collected by a magnetic field sensor at different locations in space and reconstruct the whole ambient magnetic field. Another method is proposed by Nouri et al.^{9,10}. They introduced a non-disruptive magnetic field mapping method that uses exterior measurements at fixed locations and leverages the multipole expansion of the magnetic field vector. Expanding the magnetic field to some finite degree (n = N), they provide a systematic way to optimize sensor locations and fit the unknown coefficients of the multipole expansion using the data from those exterior sensor measurements. This method is susceptible to noise in the data, and, because the multipole expansion terms must be chosen to match a specific field profile, the expansion coefficients are not regularized.

In this paper, we propose a robust way of predicting the magnetic field vector in the experimental region. To accomplish this, we utilize physics-informed neural networks (PINNs)^{11}. PINNs incorporate prior physical knowledge about the system, expressed through its partial differential equations, into deep neural networks while retaining their universal function approximator property. Physics-informed learning blends data and mathematical models seamlessly, even in noisy, high-dimensional settings that are only partially understood, and can solve general inverse problems very successfully^{12,13,14}. Unlike the method proposed in Refs.^{9,10}, our method does not require prior knowledge of the multipole expansion terms to be fitted. This special type of neural network regularizes the output function (the magnetic field prediction) during training by requiring the output to satisfy Maxwell’s equations, specifically Ampère’s law and Gauss’ law for magnetism.

In this work, we are interested in predicting the magnetic field components inside a three-dimensional region enclosed by an external surface *S*, utilizing knowledge of the magnetic field at some number of locations on the surface *S*. Assuming there are no free currents, \(\boldsymbol{J} = 0\), and no magnetization, \(\boldsymbol{M} = 0\), in the region of interest, the partial differential equations that govern the static magnetic field are quite concise. They are

$$ \nabla \cdot \boldsymbol{B} = 0 \tag{1} $$

and

$$ \nabla \times \boldsymbol{B} = 0. \tag{2} $$

Therefore, it is possible to find a magnetic field that satisfies (1) and (2) together with the knowledge of the magnetic field at some number of locations. We choose those locations on the surface of the closed region *S* inside which we are interested in approximating the magnetic field (Fig. 1). Of course, according to the electromagnetism uniqueness theorem, having a finite number of data points on a surface does not guarantee a unique solution to Eqs. (1) and (2). However, as we demonstrate later in this paper, our results indicate that one can successfully approximate the true solution with a sufficient number of data points scattered on the surface.
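As a quick numerical sanity check (not part of the paper’s method), any physical magnetostatic field should satisfy both constraints (1) and (2) pointwise. The sketch below verifies this with central finite differences for a point-dipole field, a convenient closed-form example; all names are ours:

```python
import numpy as np

def dipole_field(r, m=np.array([0.0, 0.0, 1.0])):
    """Point-dipole field at position r (arbitrary units) -- a closed-form
    field that is divergence- and curl-free away from the origin."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

def divergence(f, r, h=1e-5):
    """Central-difference estimate of (div f)(r)."""
    out = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        out += (f(r + e)[i] - f(r - e)[i]) / (2.0 * h)
    return out

def curl(f, r, h=1e-5):
    """Central-difference estimate of (curl f)(r)."""
    J = np.zeros((3, 3))  # J[i, j] = dF_i / dx_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(r + e) - f(r - e)) / (2.0 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

r0 = np.array([0.3, -0.4, 0.5])
div_res = divergence(dipole_field, r0)   # ~0 up to discretization error
curl_res = curl(dipole_field, r0)        # ~[0, 0, 0]
```

The same finite-difference residuals, evaluated on a trained model, give a quick diagnostic of how well the physics constraints are being honored.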

In this section, we review the field monitoring method described in Ref.^{9}. Equation (2) indicates that the magnetic field vector can be written as the gradient of a scalar magnetic potential function,

$$ \boldsymbol{B} = -\nabla \Phi_M(\boldsymbol{r}). \tag{3} $$

Substituting (3) into Eq. (1) tells us that the magnetic scalar potential satisfies Laplace’s equation,

$$ \nabla^2 \Phi_M = 0, \tag{4} $$

and the solution of Laplace’s equation in spherical coordinates is given by

$$ \Phi_M(r, \theta, \phi) = \sum_{l=0}^{\infty} \sum_{m=0}^{l} r^{l} \, P_l^m(\cos\theta) \left[ a_{lm} \cos(m\phi) + b_{lm} \sin(m\phi) \right], \tag{5} $$

where \(P_l^m\) are the associated Legendre polynomials, and \(a_{lm}\) and \(b_{lm}\) are expansion coefficients. The magnetic field can be obtained by calculating the gradient of the magnetic scalar potential, \(\boldsymbol{B} = -\nabla \Phi_M(\boldsymbol{r})\). Absorbing \(a_{lm}\) and \(b_{lm}\) into coefficients \(c_n\), we can write the magnetic field in the compact form

$$ \boldsymbol{B}(x, y, z) = \sum_{n} c_n \, \boldsymbol{f}_n(x, y, z), \tag{6} $$

where \(\boldsymbol{f}_n(x, y, z)\) are vector basis functions satisfying \(\nabla \cdot \boldsymbol{f}_n = 0\) and \(\nabla \times \boldsymbol{f}_n = 0\).

To illustrate, the first 10 \(\boldsymbol{f}_n\) basis vector functions are listed in Table 1. The right-hand side of Eq. (6) is truncated at some finite order \(n = N\), and the magnetic field vector inside the volume can be interpolated using linear regression techniques.
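A minimal sketch of this linear-regression step is given below. The basis functions here are gradients of low-order harmonic polynomials (automatically divergence- and curl-free); they are illustrative stand-ins, not necessarily the exact entries of the paper’s Table 1, and all names are ours:

```python
import numpy as np

# Divergence- and curl-free vector basis functions f_n, taken as gradients
# of low-order harmonic polynomials (illustrative choice).
BASIS = [
    lambda r: np.array([1.0, 0.0, 0.0]),             # grad x
    lambda r: np.array([0.0, 1.0, 0.0]),             # grad y
    lambda r: np.array([0.0, 0.0, 1.0]),             # grad z
    lambda r: np.array([r[1], r[0], 0.0]),           # grad (xy)
    lambda r: np.array([r[2], 0.0, r[0]]),           # grad (xz)
    lambda r: np.array([0.0, r[2], r[1]]),           # grad (yz)
    lambda r: np.array([2*r[0], -2*r[1], 0.0]),      # grad (x^2 - y^2)
    lambda r: np.array([-2*r[0], -2*r[1], 4*r[2]]),  # grad (2z^2 - x^2 - y^2)
]

def fit_multipole(positions, fields):
    """Least-squares fit of B(r) = sum_n c_n f_n(r) to sensor data."""
    A = np.array([[f(p) for f in BASIS] for p in positions])  # (N, n_basis, 3)
    A = A.transpose(0, 2, 1).reshape(-1, len(BASIS))          # stack x,y,z rows
    b = np.asarray(fields).reshape(-1)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

def eval_multipole(c, r):
    return sum(cn * f(r) for cn, f in zip(c, BASIS))

# synthetic check: recover a field that lies in the span of the basis
rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(30, 3))
true_c = np.array([0.5, 0.0, 1.0, 0.2, 0.0, 0.0, 0.1, 0.05])
data = np.array([eval_multipole(true_c, p) for p in pos])
c_hat = fit_multipole(pos, data)   # recovers true_c on noiseless data
```

With noiseless data in the span of the basis, the fit is exact; the susceptibility to noise discussed above appears when the measured fields are perturbed.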

Magnetic field sensors (red dots) placed on a surface *S* to predict the magnetic field \(\boldsymbol{B}\) in the inner region.

The exact values of the partial derivatives in (1) and (2) can be calculated by automatic differentiation^{11}, which is implemented in well-known machine learning libraries such as TensorFlow^{15} and PyTorch^{16}. The neural network we train to approximate the magnetic field inside the region has the structure shown in Fig. 2. The hyperbolic tangent is used as the activation of each hidden layer; the other activation functions we tested did not perform as well for this network architecture. The number of hidden layers is chosen to be 4 or 8, each with 32 or 64 neurons. The performance of these 4 different-sized networks is discussed later.
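For concreteness, a PyTorch sketch of such a network (layer counts and widths as stated above; variable and class names are ours, not the authors’):

```python
import torch
import torch.nn as nn

class FieldNet(nn.Module):
    """MLP mapping (x, y, z) -> (Bx, By, Bz), tanh hidden activations."""
    def __init__(self, hidden_layers=4, width=32):
        super().__init__()
        layers, in_dim = [], 3
        for _ in range(hidden_layers):
            layers += [nn.Linear(in_dim, width), nn.Tanh()]
            in_dim = width
        layers.append(nn.Linear(in_dim, 3))  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, r):
        return self.net(r)

net = FieldNet(hidden_layers=4, width=32)
B = net(torch.randn(10, 3))  # batch of 10 points -> (10, 3) field predictions
```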

The network takes 3 inputs, the (*x*, *y*, *z*) coordinates, and outputs the magnetic field \(\boldsymbol{B}\). Automatic differentiation is used to calculate the exact derivatives of the output \(\boldsymbol{B}\) with respect to the input parameters.

Then, the network can be trained with a loss function combining data, curl, and divergence losses,

$$ \mathcal{L} = \mathcal{L}_{\text{data}} + \lambda \left( \mathcal{L}_{\text{curl}} + \mathcal{L}_{\text{div}} \right), \tag{7} $$

where

$$ \mathcal{L}_{\text{data}} = \frac{1}{N_{\boldsymbol{B}}} \sum_{i=1}^{N_{\boldsymbol{B}}} \left| \boldsymbol{B}(\boldsymbol{r}_{\boldsymbol{B}}^{i}) - \boldsymbol{B}_{s}(\boldsymbol{r}_{\boldsymbol{B}}^{i}) \right|^2, \tag{8} $$

$$ \mathcal{L}_{\text{curl}} = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| \nabla \times \boldsymbol{B}(\boldsymbol{r}_{d}^{i}) \right|^2, \tag{9} $$

and

$$ \mathcal{L}_{\text{div}} = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| \nabla \cdot \boldsymbol{B}(\boldsymbol{r}_{d}^{i}) \right|^2, \tag{10} $$

where the points \(\boldsymbol{r}_{\boldsymbol{B}}^{i}\) and \(\boldsymbol{r}_{d}^{i}\) denote the positions of the magnetic sensors and the collocation points, respectively; \(N_{\boldsymbol{B}}\) is the number of magnetic field sensors, \(N_f\) is the number of collocation points in the domain, and \(\boldsymbol{B}_{s}\) is the measured magnetic field vector at \(\boldsymbol{r}_{\boldsymbol{B}}^{i}\). The parameter \(\lambda\) in Eq. (7) can be adjusted according to the performance of the network. The collocation points \(\boldsymbol{r}_{d}^{i}\) in Eqs. (9) and (10) are sampled from the volume enclosed by the surface *S* (Fig. 1) and can be chosen to be fixed throughout the training process^{11}. However, randomly choosing collocation points in each epoch leads to quicker convergence as well as more accurate results. This is partly because fewer collocation points can be used, and, since they are reassigned randomly each iteration, they represent the domain better than any fixed collocation scheme. We use the ADAM optimizer^{17}, an adaptive method for gradient-based first-order optimization, to minimize the loss function (7). The general training procedure is given in Algorithm 1.
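A condensed PyTorch sketch of this training procedure, computing the curl and divergence penalties by automatic differentiation and resampling collocation points every epoch, might look as follows. This is a simplified stand-in for Algorithm 1, not the authors’ code; the learning rate, epoch counts, and the unit-cube domain are illustrative assumptions:

```python
import torch

def field_residuals(net, r):
    """Curl and divergence of the network output via automatic differentiation."""
    r = r.clone().requires_grad_(True)
    B = net(r)
    # grads[i][:, j] = dB_i / dx_j at every collocation point
    grads = [torch.autograd.grad(B[:, i].sum(), r, create_graph=True)[0]
             for i in range(3)]
    div = grads[0][:, 0] + grads[1][:, 1] + grads[2][:, 2]
    curl = torch.stack([grads[2][:, 1] - grads[1][:, 2],
                        grads[0][:, 2] - grads[2][:, 0],
                        grads[1][:, 0] - grads[0][:, 1]], dim=1)
    return curl, div

def train(net, r_sensors, B_sensors, n_colloc=1000, lam=1.0,
          epochs=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_data = ((net(r_sensors) - B_sensors) ** 2).mean()
        # fresh random collocation points in the cube [-1, 1]^3 every epoch
        r_c = torch.rand(n_colloc, 3) * 2 - 1
        curl, div = field_residuals(net, r_c)
        loss = loss_data + lam * ((curl ** 2).mean() + (div ** 2).mean())
        loss.backward()
        opt.step()
    return net

# short demonstration run on synthetic data (a uniform field)
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 3))
r_s = torch.rand(18, 3) * 2 - 1
B_s = torch.tensor([0.0, 0.0, 1.0]).expand(18, 3)
train(net, r_s, B_s, n_colloc=100, epochs=100)
```

Note the `create_graph=True` in `field_residuals`: the curl/divergence terms must themselves remain differentiable so that `loss.backward()` can propagate through them to the network weights.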

In the following example, we demonstrate the capability of our magnetic field prediction model by placing an arbitrary number of triple-axis magnetic sensors on the surface of a cube. The magnetic field sensors are placed on the cube randomly, and we generate training and validation data using the Biot–Savart law for circular current loop(s). In the next section, we give the analytical expression for the three-dimensional magnetic field vector of a single circular current loop, and then construct a higher-order asymmetric magnetic field by placing multiple loops with different currents, on which we benchmark our method.

We begin by demonstrating the ability of our magnetic field reconstruction method by considering the magnetic field of a simple circular current loop (in arbitrary units). The magnetic field components of a circular current loop with radius *a*, lying in the \(z = 0\) plane and centered at the origin, are given by^{18,19}

$$ B_x = \frac{C x z}{2 \alpha^2 \beta \rho^2} \left[ (a^2 + r^2)\, E(k^2) - \alpha^2 K(k^2) \right], $$

$$ B_y = \frac{C y z}{2 \alpha^2 \beta \rho^2} \left[ (a^2 + r^2)\, E(k^2) - \alpha^2 K(k^2) \right], $$

$$ B_z = \frac{C}{2 \alpha^2 \beta} \left[ (a^2 - r^2)\, E(k^2) + \alpha^2 K(k^2) \right], $$

with

$$ k^2 = 1 - \frac{\alpha^2}{\beta^2}, \qquad C = \frac{\mu_0 I}{\pi}, $$

where *E*(*k*) and *K*(*k*) are the complete elliptic integrals of the second and first kind, \(\rho^2 \equiv x^2 + y^2\), \(\alpha^2 \equiv a^2 + r^2 - 2 a \rho\), \(\beta^2 \equiv a^2 + r^2 + 2 a \rho\), \(r \equiv \sqrt{x^2 + y^2 + z^2}\), and \(z = r \cos\theta\). In this work, we use arbitrary units by setting \(C = 1\).
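These loop-field expressions can be evaluated directly with SciPy’s elliptic integrals (which take the parameter \(m = k^2\)); the sketch below is our own implementation, with a tiny offset to avoid the on-axis \(\rho \to 0\) singularity, and is checked against the familiar on-axis formula \(B_z = C \pi a^2 / \left[2 (a^2 + z^2)^{3/2}\right]\):

```python
import numpy as np
from scipy.special import ellipk, ellipe

def loop_field(x, y, z, a=1.0, C=1.0, eps=1e-12):
    """B field of a circular current loop of radius a in the z=0 plane.
    C = mu0*I/pi; eps regularizes the on-axis (rho = 0) evaluation."""
    rho = np.sqrt(x**2 + y**2) + eps
    r2 = x**2 + y**2 + z**2
    alpha2 = a**2 + r2 - 2.0 * a * rho
    beta2 = a**2 + r2 + 2.0 * a * rho
    beta = np.sqrt(beta2)
    k2 = 1.0 - alpha2 / beta2          # elliptic parameter m = k^2
    E, K = ellipe(k2), ellipk(k2)
    common = ((a**2 + r2) * E - alpha2 * K) / (2.0 * alpha2 * beta * rho**2)
    Bx = C * x * z * common
    By = C * y * z * common
    Bz = C * ((a**2 - r2) * E + alpha2 * K) / (2.0 * alpha2 * beta)
    return np.array([Bx, By, Bz])

# on-axis check: B_z(0, 0, z) should equal C*pi*a^2 / (2*(a^2 + z^2)^1.5)
z = 0.7
B = loop_field(0.0, 0.0, z)
```

On axis, \(k^2 \to 0\) and \(E = K = \pi/2\), and the expression collapses to the textbook on-axis field, which is the correctness check used in the test.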

Configuration of the test model. Red circles: current loops; blue cube: cubical sensor array; red dots: triple-axis magnetic field sensors.

We want to show the potential of the network by comparing it to the multipole expansion method for various sensor counts and different types and levels of noise. To create a non-uniform, higher-order magnetic field, we positioned 8 circular loops carrying different currents at the positions \((x = \pm 1.01,\ y = \pm 1,\ z = \pm 4)\), and the triple-axis magnetic sensors are placed randomly on the surface of a cube with side length \(L = 2\) centered at the origin. The configuration is illustrated in Fig. 3. Our goal is to predict the magnetic field in the inner region of the surface.
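Random sensor placement on the cube surface can be sketched as follows (our own helper, not the authors’ code): draw a point inside the cube, then snap one coordinate to a randomly chosen face.

```python
import numpy as np

def sample_cube_surface(n, L=2.0, rng=None):
    """Sample n sensor positions on the surface of a cube of side L
    centered at the origin (uniform within each face)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    pts = rng.uniform(-L / 2, L / 2, size=(n, 3))
    axis = rng.integers(0, 3, size=n)        # which pair of faces
    sign = rng.choice([-1.0, 1.0], size=n)   # which of the two faces
    pts[np.arange(n), axis] = sign * L / 2   # snap that coordinate to the face
    return pts

sensors = sample_cube_surface(18)  # 18 triple-axis sensor locations
```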

The number of hidden layers and neurons of the network characterizes the complexity of the functions it can approximate. Although having more hidden layers and neurons should not negatively affect the performance, training larger networks is slower and may require more care with the initialization and regularization of the weights^{20}. In this example, larger network sizes resulted in better performance, as shown in Table 2, as expected. Models were trained in less than 2 min in all cases on an NVIDIA RTX 3080 GPU.

Top: magnitude plot of the predicted magnetic field along with the exact magnetic field and the error at the snapshot \(z = 0.72\); middle: comparison of the predicted and multipole expansion solutions in the *x*, *y*, *z* directions with 18 sensors; bottom: the same comparison with 30 sensors.

Greater sensor counts give more information about the magnetic field of the system, and we would expect the network to use that information to predict the magnetic field better. As shown in Table 2, more sensory information led to better performance for all network structures. Moreover, lower sensor counts did not lead to a divergence from the exact magnetic field. This is not the case for the multipole expansion method, as shown in Table 3. That method suffers with relatively few sensors, and its higher-order versions overfit the sensor data. Decreasing the order in this case leads to better results, but because lower orders have fewer basis functions, the method cannot predict the exact magnetic field as well as our network. This can also be seen in Figs. 4 and 5.

Top: comparison of the predicted and multipole expansion solutions in the *x*, *y*, *z* directions with 18 sensors and Gaussian noise with \(\sigma = 1.0 \times 10^{-2}\); bottom: the same comparison with 30 sensors.

The performance of the network when Gaussian noise is introduced into the sensor data is given in Table 4. The noise led to a further deterioration of the multipole expansion method’s performance. Our method was also affected, but performed better across the various sensor counts.
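The noise model above is a simple per-component perturbation; a sketch of how such noisy training data can be generated (our own helper, with illustrative values):

```python
import numpy as np

def add_gaussian_noise(B, sigma=1.0e-2, rng=None):
    """Perturb each measured field component with i.i.d. Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(1)
    return B + rng.normal(0.0, sigma, size=B.shape)

B_clean = np.tile([0.0, 0.0, 1.0], (30, 1))  # 30 ideal sensor readings
B_noisy = add_gaussian_noise(B_clean)        # sigma = 1e-2, as in the study
```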

To demonstrate our methodology on actual data, we conducted an experiment in which a Bartington triple-axis magnetic field probe (Mag-13MS1000) was moved to the locations of the training data collection points. To generate a non-uniform magnetic field, two rectangular coils were stacked vertically and driven with different current magnitudes in opposite directions (Fig. 6). Each face of the coils is a printed circuit board (PCB) with dimensions 55 cm \(\times\) 16 cm containing 50 parallel line traces along the long side of the PCB. A current of magnitude 1 A flows counterclockwise in the top coil, and a current of magnitude 0.6 A flows clockwise in the bottom coil.

To isolate the field generated by the coils, at each measurement location the data are collected as the difference between the sensor measurements with the coils turned on and off. We then trained the network on the magnetic field data collected with the magnetic mapping system. The training domain is a cube with 40 cm side length placed at the center of the coils. Performance benchmarks of our network and the multipole expansion method on the measurement data are given in Figs. 7 and 8.
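The background subtraction described above amounts to a simple element-wise difference of the two readings at each location (the field values below are made-up for illustration):

```python
import numpy as np

def isolate_coil_field(B_on, B_off):
    """Remove the ambient background by differencing on/off measurements."""
    return np.asarray(B_on) - np.asarray(B_off)

ambient = np.array([20.0, -5.0, 43.0])  # hypothetical ambient field reading
coil = np.array([0.0, 0.0, 1.5])        # hypothetical coil contribution
B_coil = isolate_coil_field(ambient + coil, ambient)  # recovers the coil field
```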

Setup of the experiment. Left: the 3D magnetic field mapping system sitting underneath the rectangular coils. Right: the triple-axis magnetometer inside the coils.

Comparison of the predicted and multipole expansion solutions along the line \(x = 10\), \(y = 10\). Top: 18 sensors; bottom: 30 sensors.

Comparison of the predicted and multipole expansion solutions along the line \(y = 10\), \(z = 10\). Top: 18 sensors; bottom: 30 sensors.

In this study, we presented an efficient and practical method for mapping the magnetic field in inaccessible regions. We encoded prior knowledge from Maxwell’s equations for magnetostatics into a physics-informed neural network model for magnetic field prediction in regions where direct measurements are not possible.

We provided two experiments that demonstrated the practicality of the proposed method. A simulated experiment showed the value of incorporating additional physics knowledge into the model, and mapping the magnetic field of a rectangular coil system illustrated the effectiveness of the approximation technique in real-world applications.

Compared with the multipole expansion method, our method performed better across various sensor counts and noise levels, for both simulated data and real-world measurement data.

The datasets generated and/or analysed during the current study are available in the Github repository, https://github.com/ucoskun/bmapping-pinn/tree/main/data.

Ahmed, M. *et al.* A new cryogenic apparatus to search for the neutron electric dipole moment. *J. Instrum.* **14**, P11017–P11017. https://doi.org/10.1088/1748-0221/14/11/p11017 (2019).


Abi, B. *et al.* Measurement of the positive muon anomalous magnetic moment to 0.46 ppm. *Phys. Rev. Lett.* **126**, 141801. https://doi.org/10.1103/PhysRevLett.126.141801 (2021).


Gonzalez, F. M. *et al.* Improved neutron lifetime measurement with \(\mathrm{UCN}\tau\). *Phys. Rev. Lett.* **127**, 162501. https://doi.org/10.1103/PhysRevLett.127.162501 (2021).


Rondin, L. *et al.* Nanoscale magnetic field mapping with a single spin scanning probe magnetometer. *Appl. Phys. Lett.* **100**, 153118. https://doi.org/10.1063/1.3703128 (2012).


Grover, V. P. B. *et al.* Magnetic resonance imaging: Principles and techniques: Lessons for clinicians. *J. Clin. Exp. Hepatol.* **5**, 246–255. https://doi.org/10.1016/j.jceh.2015.08.001 (2015).


Le Grand, E. & Thrun, S. 3-axis magnetic field mapping and fusion for indoor localization. In *2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)*, 358–364. https://doi.org/10.1109/MFI.2012.6343024 (2012).

Haverinen, J. & Kemppainen, A. Global indoor self-localization based on the ambient magnetic field. *Robot. Auton. Syst.* **57**, 1028–1035. https://doi.org/10.1016/j.robot.2009.07.018 (2009) (**5th International Conference on Computational Intelligence, Robotics and Autonomous Systems (5th CIRAS)**).


Solin, A., Kok, M., Wahlström, N., Schön, T. B. & Särkkä, S. Modeling and interpolation of the ambient magnetic field by Gaussian processes. *IEEE Trans. Robot.* **34**, 1112–1127. https://doi.org/10.1109/TRO.2018.2830326 (2018).


Nouri, N. *et al.* A prototype vector magnetic field monitoring system for a neutron electric dipole moment experiment. *J. Instrum.* **10**, P12003–P12003. https://doi.org/10.1088/1748-0221/10/12/p12003 (2015).


Nouri, N. & Plaster, B. Systematic optimization of exterior measurement locations for the determination of interior magnetic field vector components in inaccessible regions. *Nucl. Instrum. Methods Phys. Res. A* **767**, 92–98. https://doi.org/10.1016/j.nima.2014.08.026 (2014).


Raissi, M., Perdikaris, P. & Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. *J. Comput. Phys.* **378**, 686–707. https://doi.org/10.1016/j.jcp.2018.10.045 (2019).


Schiassi, E., De Florio, M., D’Ambrosio, A., Mortari, D. & Furfaro, R. Physics-informed neural networks and functional interpolation for data-driven parameters discovery of epidemiological compartmental models. *Mathematics*. https://doi.org/10.3390/math9172069 (2021).


Schiassi, E. *et al.* Extreme theory of functional connections: A fast physics-informed neural network method for solving ordinary and partial differential equations. *Neurocomputing* **457**, 334–356. https://doi.org/10.1016/j.neucom.2021.06.015 (2021).


Dwivedi, V. & Srinivasan, B. Physics informed extreme learning machine (PIELM)-a rapid method for the numerical solution of partial differential equations. *Neurocomputing* **391**, 96–118. https://doi.org/10.1016/j.neucom.2019.12.099 (2020).


Abadi, M. *et al.* TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org.

Paszke, A. *et al.* Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32* (eds. Wallach, H. *et al.*) 8024–8035 (Curran Associates, Inc., 2019).

Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. https://doi.org/10.48550/ARXIV.1412.6980 (2014).

Jackson, J. D. *Classical Electrodynamics*, 3rd ed. (Wiley, 1999).

Bartberger, C. L. The magnetic field of a plane circular loop. *J. Appl. Phys.* **21**, 1108–1114. https://doi.org/10.1063/1.1699551 (1950).


He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In *Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.* (2016).


This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Number DE-SC0014622.

Department of Physics and Astronomy, University of Kentucky, Lexington, KY, 40506, USA

Umit H. Coskun & Brad Plaster

The Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, 24061, USA

Bilgehan Sel


U.H.C. developed the physical concept, simulations and conducted the experiment. B.S. constructed the network model, processed the data and trained the network. B.P. supervised the project. All authors interpreted the results and wrote the manuscript.

Correspondence to Umit H. Coskun.

The authors declare no competing interests.


Coskun, U.H., Sel, B. & Plaster, B. Magnetic field mapping of inaccessible regions using physics-informed neural networks. *Sci. Rep.* **12**, 12858 (2022). https://doi.org/10.1038/s41598-022-15777-4

