Research/Technical Note | Peer-Reviewed

Tuning the Training of Neural Networks by Using the Perturbation Technique

Received: 5 June 2025     Accepted: 21 June 2025     Published: 6 July 2025
Abstract

The biases and weights at the layers of a neural network are calculated, or trained, by the stochastic random steepest descent (stochastic gradient descent) method. To increase efficiency and performance, a perturbation scheme is introduced for fine tuning these calculations; the aim is to introduce perturbation techniques into the training of artificial neural networks. Perturbation methods obtain approximate solutions as expansions in a small parameter ε. The perturbation technique can be used in several combinations with other training methods to minimize the data used, the training time, and the energy consumed. The introduced perturbation parameter ε can be selected according to the nature of the training data, and its value can be determined through several trials. The stochastic random descent method on its own requires considerable training time and energy; its proper combination with the perturbation technique shortens the training time. Both methods are used abundantly on their own, but their combined use leads to optimal solutions. A proper cost function can be chosen for the optimal use of the perturbation parameter ε. Shortening the training time also allows the dominant inputs of the output values to be determined. Energy consumption, one of the essential problems of training, is decreased by using such hybrid training methods.

Published in American Journal of Artificial Intelligence (Volume 9, Issue 2)
DOI 10.11648/j.ajai.20250902.11
Page(s) 107-109
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Neural Networks, Stochastic Random Steepest Descent, Perturbation Techniques

1. Introduction
Neural networks, together with the concept of "a thinking machine", can be traced back to antiquity. The calculations of neural networks, Hardesty and Yang, are performed by the stochastic random steepest descent method, Bouwmeester et al. An efficient method, the perturbation method, can be used in combination with it, Boccara and Nayfeh. Shao and Shen, Levitan and Kaczmarek, and Rosenblatt focus on the important events that led to the development of thinking around neural networks. The functions and structure of the human brain have guided the design of Artificial Neural Networks (ANNs): the human brain processes information and inputs through networks of neurons, and this concept has become a basis of artificial intelligence (AI). The methodology is used in many AI applications, such as prediction, language, and image recognition models. An ANN is a numerical and mathematical model that uses a methodology similar to that of the human brain, collecting data and passing it through a layered structure to reach a conclusion. An ANN consists of an input layer, hidden layers, and an output layer. The layers are represented by nodes, which are connected to each other through thresholds and weights. Each node can be thought of as having its own individual linear regression model,
$y_i = b_i + \sum_j w_{ij} x_j$ (1)
Here,
$y_i$, $x_j$, $b_i$ and $w_{ij}$ are the outputs, inputs, biases and weights,
respectively.
$i$, $j$ and $k$ are the indices of outputs, inputs and perturbation order,
with $i = 1$ to $n_1$ and $j = 1$ to $n_2$;
$n_1$ = number of outputs and $n_2$ = number of inputs.
If the input layer is considered, biases and weights are chosen according to the importance of the input variables. All input values are multiplied by the related weights, and the data are passed from one layer forward to the next, finally reaching the output layer. Suitable nodal values, biases, and weights produce the desired output values. The stochastic random steepest descent method, or stochastic gradient descent, is an iterative optimization algorithm used to find the minimum of the error (cost) function when determining the biases and weights, Bishop, Needell et al. and Bottou; its iterative nature makes it efficient for high dimensional problems and large datasets. The solution obtained from the stochastic random steepest descent method is tuned or extended by using the perturbation technique, Bender, Holmes and Wiesel. Perturbation theory provides methods for obtaining acceptable approximate solutions to many mathematical problems; the essential feature of the technique is dividing the problem into a "solvable" part and corrections to it. In a perturbation solution, the solution is expressed as a power series in a small parameter ε. The leading term is the solution of the solvable 0th (zeroth order) problem, and the terms at higher powers of ε are usually successively smaller. A perturbation solution is obtained by truncating the series; most often the first two terms, the solution of the known problem and the 'first order' perturbation correction, are sufficient. The resulting power series, known as a perturbation series in the parameter ε, represents the deviation from the essential problem: its leading term solves the essential problem, while the further terms approximate the deviation of the full solution from it, order by order in ε.
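To make the node model of Equation (1) and the stochastic descent step concrete, the following minimal NumPy sketch may help. It is an illustration only: the synthetic data, learning rate, and epoch count are assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the node model of Equation (1), y_i = b_i + sum_j w_ij x_j,
# trained by stochastic (per-sample) gradient descent on a squared-error cost.
# Data sizes, learning rate, and epoch count are illustrative assumptions.
rng = np.random.default_rng(0)
n_out, n_in, n_samples = 2, 4, 256
X = rng.normal(size=(n_samples, n_in))           # inputs x_j
Y = X @ rng.normal(size=(n_in, n_out)) + 0.5     # synthetic targets y_i

W = np.zeros((n_in, n_out))                      # weights w_ij
b = np.zeros(n_out)                              # biases b_i
lr = 0.01                                        # learning rate

for epoch in range(100):
    for s in rng.permutation(n_samples):         # stochastic: one sample at a time
        err = (b + X[s] @ W) - Y[s]              # prediction error for this sample
        W -= lr * np.outer(X[s], err)            # gradient of 0.5*||err||^2 w.r.t. W
        b -= lr * err                            # gradient w.r.t. b
```

Each update uses a single randomly ordered sample, which is what makes the method efficient for large datasets, as noted above.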
2. Analysis
The equation of the neural network for the $i$th output is:
$y_i = \varepsilon^0 \left(b_i^0 + \sum_j w_{ij}^0 x_j\right) + \varepsilon^1 \left(b_i^1 + \sum_j w_{ij}^1 x_j\right) + \varepsilon^2 \left(b_i^2 + \sum_j w_{ij}^2 x_j\right) + \dots$ (2)
The appropriate $b_i^0$ and $w_{ij}^0$ can be found from Equation (1), Vapnik and Goodfellow:
$y_i = b_i^0 + \sum_j w_{ij}^0 x_j$, (3)
where $b_i^k$ and $w_{ij}^k$ are the biases and weights of the perturbation at the $k$th level.
The index $k$ may run from 1 up to the desired accuracy level. The perturbation expansions through the powers of ε for the biases and weights are as follows:
$b_i = \varepsilon^0 b_i^0 + \varepsilon^1 b_i^1 + \varepsilon^2 b_i^2 + \dots$ and (4)
$w_{ij} = \varepsilon^0 w_{ij}^0 + \varepsilon^1 w_{ij}^1 + \varepsilon^2 w_{ij}^2 + \dots$ (5)
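As a concrete illustration of the truncated expansions (4) and (5), the effective biases and weights can be assembled from their per-order terms. A minimal sketch follows; the value ε = 0.1 and the per-order arrays are assumptions chosen purely for illustration.

```python
import numpy as np

# Truncated perturbation series of Equations (4) and (5), kept to second order.
# eps and the per-order terms are illustrative; b0 and W0 would come from the
# fit of Equation (3), the higher orders from the tuning step of Equation (6).
eps = 0.1
b0, b1, b2 = np.array([0.50, -0.20]), np.array([0.03, 0.01]), np.array([-0.002, 0.004])
W0 = np.array([[1.00, 0.20], [0.10, -0.50]])
W1 = np.array([[0.02, -0.01], [0.00, 0.03]])
W2 = np.array([[-0.001, 0.002], [0.001, 0.000]])

b = b0 + eps * b1 + eps**2 * b2      # Equation (4), with eps**0 = 1
W = W0 + eps * W1 + eps**2 * W2      # Equation (5)

x = np.array([0.7, -1.2])            # an example input vector
y = b + x @ W                        # tuned output, cf. Equation (2)
```

Because each order is damped by a further factor of ε, the corrections shrink rapidly, which is why truncating after one or two terms is usually sufficient.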
The stochastic random descent method or similar methods can be used for the evaluation, or training, of the zeroth order terms.
The higher order perturbation terms $b_i^k$ and $w_{ij}^k$, which are the tuning values of the biases and weights, can be obtained from the following equation, for $k \geq 1$:
$y_i - \sum_{l=0}^{k-1} \varepsilon^l \left(b_i^l + \sum_j w_{ij}^l x_j\right) = \varepsilon^k \left(b_i^k + \sum_j w_{ij}^k x_j\right)$. (6)
The value of ε and Equation (6) serve to tune the values obtained from Equation (3). Further training of the data can be performed by using the stochastic random descent method. These alternatives improve and tune the training of the neural network.
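One way to realize the tuning step of Equation (6) is to fit the $k$th-order terms to the residual left by the lower orders, rescaled by $\varepsilon^{-k}$, reusing the same stochastic descent loop at every order. The sketch below is an illustration under assumed values of ε, the truncation order, and synthetic data; the helper name sgd_fit is hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 4))             # inputs x_j (synthetic, illustrative)
Y = X @ rng.normal(size=(4, 2)) + 0.5     # targets y_i

def sgd_fit(X, T, lr=0.01, epochs=200):
    """Fit T ~ b + X @ W by per-sample (stochastic) gradient descent."""
    W = np.zeros((X.shape[1], T.shape[1]))
    b = np.zeros(T.shape[1])
    for _ in range(epochs):
        for s in rng.permutation(len(X)):
            err = (b + X[s] @ W) - T[s]
            W -= lr * np.outer(X[s], err)
            b -= lr * err
    return b, W

eps, K = 0.1, 2                           # assumed small parameter and highest order
orders, residual = [], Y.copy()
for k in range(K + 1):
    # Equation (6): the k-th order terms absorb the residual of the lower
    # orders, so the fitting target at order k is the residual over eps**k.
    b_k, W_k = sgd_fit(X, residual / eps**k)
    orders.append((b_k, W_k))
    residual -= eps**k * (b_k + X @ W_k)  # remove this order's contribution

# Recombine the truncated series, Equations (4) and (5).
b = sum(eps**k * b_k for k, (b_k, _) in enumerate(orders))
W = sum(eps**k * W_k for k, (_, W_k) in enumerate(orders))
```

With a well-fitted zeroth order the rescaled residuals stay small, and each successive order only fine-tunes the previous ones; this is the shortening of training that the combined method aims at.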
This methodology is a hybrid numerical method for faster and more robust computations in AI problems.
3. Conclusion
The perturbation method, which is actively used in many fields and disciplines, is blended into the stochastic random steepest descent method to tune its results. Perturbation theory is used in a wide range of problems, and the aim here is to improve the stochastic random steepest descent method or similar methods. The field in general remains actively and heavily researched across multiple disciplines.
Author Contributions
Huseyin Murat Cekirge is the sole author. The author read and approved the final manuscript.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] Hardesty L (14 April 2017). "Explained: Neural networks". MIT News Office. Archived from the original on 18 March 2024. Retrieved 2 June 2022.
[2] Yang Z (2014). Comprehensive Biomedical Physics. Karolinska Institute, Stockholm, Sweden: Elsevier. p. 1. Archived from the original on 28 July 2022. Retrieved 28 July 2022.
[3] Bouwmeester, Henricus; Dougherty, Andrew and Knyazev, Andrew V. (2015). "Nonsymmetric Preconditioning for Conjugate Gradient and Steepest Descent Methods". Procedia Computer Science. 51: 276–285. arXiv:1212.6680.
[4] Boccara, N. Essentials of Mathematica: With Applications to Mathematics and Physics. Springer, New York, 2007.
[5] Nayfeh, Ali H. (2004). Perturbation Methods. Wiley-VCH Verlag GmbH & Co. KGaA.
[6] Shao, Feng; Shen, Zheng (January 9, 2022). "How can artificial neural networks approximate the brain?". Front. Psychol. 13: 970214.
[7] Levitan, Irwin; Kaczmarek, Leonard (August 19, 2015). "Intercellular communication". The Neuron: Cell and Molecular Biology (4th ed.). New York, NY: Oxford University Press. pp. 153–328.
[8] Rosenblatt, F. (1958). "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain". Psychological Review. 65(6): 386–408. CiteSeerX 10.1.1.588.3775.
[9] Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer.
[10] Needell, D., Srebro, N. & Ward, R. (2015, January). Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm.
[11] Bottou, L. (1991) Stochastic gradient learning in neural networks. Proceedings of Neuro-Nimes, 91.
[12] Bender, Carl M.; Orszag, Steven A. (1999). Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. New York, NY: Springer.
[13] Holmes, Mark H. (2013). Introduction to perturbation methods (2nd ed.). New York: Springer.
[14] Wiesel, William E. (2010). Modern Astrodynamics. Ohio: Aphelion Press. p. 107.
[15] Vapnik, V. N. (1998). The Nature of Statistical Learning Theory (corrected 2nd printing). New York, Berlin, Heidelberg: Springer.
[16] Goodfellow, Ian, Bengio, Yoshua and Courville, Aaron (2016). Deep Learning. MIT Press. Archived from the original on 16 April 2016. Retrieved 1 June 2016.