In this talk, we present part of our research on Laplacian Eigenmaps, a non-linear dimensionality reduction technique governed by two main hyperparameters. We explain our theoretical result on how to set one of these parameters so that the weights, which determine the embedding, become maximally stable against possible imprecision in the input data. Finally, on selected examples, we demonstrate how the two parameters balance each other to yield the most intuitive result.
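For context, a minimal sketch of the standard Laplacian Eigenmaps pipeline is shown below. It assumes the two hyperparameters are the usual neighbourhood size `k` (how many nearest neighbours form the adjacency graph) and the heat-kernel bandwidth `t` (which sets the edge weights); the function name and these parameter roles are illustrative assumptions, not necessarily the exact setup discussed in the talk.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, k=5, t=1.0, dim=2):
    """Sketch of Laplacian Eigenmaps (illustrative, not the talk's exact method).

    k   : number of nearest neighbours in the adjacency graph
    t   : heat-kernel bandwidth; W_ij = exp(-||x_i - x_j||^2 / t)
    dim : dimension of the embedding
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # k-nearest-neighbour graph with heat-kernel weights.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]  # skip the point itself
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)
    W = np.maximum(W, W.T)  # symmetrize the graph
    deg = W.sum(axis=1)
    L = np.diag(deg) - W    # unnormalized graph Laplacian
    # Generalized eigenproblem L y = lambda D y; drop the trivial
    # constant eigenvector (eigenvalue 0) and keep the next `dim`.
    vals, vecs = eigh(L, np.diag(deg))
    return vecs[:, 1:dim + 1]

# Example: embed a noisy circle into 2D.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((40, 2))
Y = laplacian_eigenmaps(X, k=4, t=0.5, dim=2)
```

Note that the weights `W` are exactly the quantities whose stability under input perturbations the talk analyses: both `k` and `t` shape them, which is why the two parameters can trade off against each other.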