Math Genius: Symmetric nodes and interpolation error

Original Source Link

Given the nodes $x_0<\dots<x_n$ that are symmetric, $x_{n-j}=-x_j$, how do I show that for $x\in\operatorname{conv}\{x_j\}$
$$
|w_n(x)|=|(x-x_0)(x-x_1)\cdots(x-x_n)|\leq\frac{|x_n-x_0|^{n+1}}{2^{n+1}}
$$

I get that I can use the symmetry $x_{n-j}=-x_j \Leftrightarrow x-x_{n-j}=x+x_j$. Then the book does the following step, in which only the last equality is unclear to me:
$$
|(x-x_j)(x-x_{n-j})|=|(x-x_j)(x+x_{j})|=|x^2-x_j^2|\overset{?}{\leq}\frac{1}{4}|x_n-x_0|^2
$$

How do I get this last inequality?
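One way to see it (a sketch, assuming the symmetry is about the origin, i.e. $x_{n-j}=-x_j$, so that in particular $x_0=-x_n$): put $c=\frac{x_n-x_0}{2}=x_n$. For $x\in\operatorname{conv}\{x_j\}=[x_0,x_n]=[-c,c]$ and any node $x_j\in[-c,c]$, both $x^2$ and $x_j^2$ lie in $[0,c^2]$, hence
$$
|x^2-x_j^2|\leq\max(x^2,x_j^2)\leq c^2=\frac{1}{4}|x_n-x_0|^2 .
$$
Pairing the factors of $w_n(x)$ as $(x-x_j)(x-x_{n-j})$ (with the middle factor, if $n$ is even, bounded directly by $|x-x_{n/2}|=|x|\leq c$) and multiplying these bounds gives the stated estimate.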

Tagged : / /

Code Bug Fix: Reindexing provides values instead of NaN for missing values

Original Source Link

I want to complete my time series of % humidity with the missing records (rows). The sensors are designed to record a mean value every 15 min, so that is my target frequency. Here is an example for one station (not the best in terms of gaps…), but I have 36 stations of measurements, 6 parameters and more than 24 000 records each to homogenize.

I select the datetime and % humidity columns for this example:

humdt = data["la-salade"][["datetime","humidite"]]

               datetime  humidite
0   2019-07-09 08:30:00        87
1   2019-07-09 11:00:00        87
2   2019-07-09 17:30:00        82
3   2019-07-09 23:30:00        80
4   2019-07-11 06:15:00        79
5   2019-07-19 14:30:00        39

I set datetime as the index (so far it works):

humdt["datetime"] = pd.to_datetime(humdt["datetime"])
humdt = humdt.set_index("datetime",drop=True)

                     humidite
datetime
2019-07-09 08:30:00        87
2019-07-09 11:00:00        87
2019-07-09 17:30:00        82
2019-07-09 23:30:00        80
2019-07-11 06:15:00        79
2019-07-19 14:30:00        39

Besides this, I prepare a datetime range matching my expectations (15 min frequency):

date_rng = pd.period_range(start=debut, end=fin, freq='15min').strftime('%Y-%m-%d %H:%M:%S')
date_rng = pd.DataFrame(date_rng)
date_rng.columns = ["datetime"]

Then I use this range to reindex my humidity values (expecting NaN where records are missing):

humdt = humdt.reindex(pd.DatetimeIndex(date_rng["datetime"]))

                     humidite
datetime
2019-07-09 08:30:00      87.0
2019-07-09 08:45:00      88.0
2019-07-09 09:00:00      88.0
2019-07-09 09:15:00      88.0
2019-07-09 09:30:00      89.0
2019-07-09 09:45:00      89.0
2019-07-09 10:00:00      88.0
2019-07-09 10:15:00      88.0
2019-07-09 10:30:00      88.0
2019-07-09 10:45:00      88.0
2019-07-09 11:00:00      87.0

As a result, I get humidity values from nowhere… not even a classical linear interpolation (e.g. between 87% at 08:30 and 87% at 11:00). Please help me, I have no clue what is going on (I also tried to merge and to resample, as here, but the behavior is not as expected). Thank you!

You can pass the fill_value argument to df.reindex:

humdt = humdt.reindex(pd.DatetimeIndex(date_rng["datetime"]), fill_value=np.nan)

This will fill the newly added rows with NaN.
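For comparison, here is a minimal sketch (with made-up values, not the poster's files) showing that reindexing against a true DatetimeIndex built with date_range, or using resample, leaves NaN in the missing 15-minute slots; the period_range + strftime route in the question produces strings, which adds a conversion step that is easy to get wrong:

import pandas as pd

# Small stand-in for one station's humidity series
humdt = pd.DataFrame(
    {"humidite": [87, 87, 82, 80]},
    index=pd.to_datetime([
        "2019-07-09 08:30:00", "2019-07-09 11:00:00",
        "2019-07-09 17:30:00", "2019-07-09 23:30:00",
    ]),
)

# Target grid built directly as a DatetimeIndex
full_index = pd.date_range(humdt.index.min(), humdt.index.max(), freq="15min")

regular = humdt.reindex(full_index)           # missing slots become NaN
regular2 = humdt.resample("15min").asfreq()   # same idea in one step
print(regular.head())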

Tagged : / / / /

Server Bug Fix: How can I resample a list of {x,y} data for evenly spaced points?

Original Source Link

I have generated a list of {x,y} points using a position function of a particle moving in 2D space. If you run the following code you’ll see a small bit of the particle’s motion (I cut out most of it since it is a very large list of points). You can see that the points are not evenly spaced from each other. However, I want to interpolate between the points so that I can get a “resample” of points which are all evenly spaced. That way I can get a proper histogram of the particle’s position on a grid. I guess using ListInterpolation[] isn’t working because it is not a continuous function or something!

myList = {{12.633, 0.}, {12.5796, 1.05904}, {12.4203, 2.10566}, {12.1574, 
  3.12766}, {11.7949, 4.11327}, {11.338, 5.05131}, {10.7935, 
  5.93142}, {10.1693, 6.74423}, {9.47453, 7.48147}, {8.71909, 
  8.13617}, {7.91375, 8.70271}, {7.06978, 9.17696}, {6.19883, 
  9.55626}, {5.3127, 9.83955}, {4.42313, 10.0273}, {3.54156, 
  10.1214}, {2.67902, 10.1253}, {1.84582, 10.0439}, {1.05149, 
  9.88323}, {0.304511, 9.65046}, {-0.387734, 9.35381}, {-1.01913, 
  9.00228}, {-1.58492, 8.60551}, {-2.08174, 8.17357}, {-2.50769, 
  7.71676}, {-2.86234, 7.24541}, {-3.14668, 6.7697}, {-3.36313, 
  6.2994}, {-3.51543, 5.84373}, {-3.60852, 5.4112}, {-3.64849, 
  5.00937}, {-3.64236, 4.64479}, {-3.59797, 4.32283}, {-3.52378, 
  4.04761}, {-3.42869, 3.82189}, {-3.32184, 3.64707}, {-3.21242, 
  3.52316}, {-3.10945, 3.44876}, {-3.0216, 3.42114}, {-2.957, 
  3.43629}, {-2.92306, 3.48901}, {-2.92631, 3.57302}, {-2.97223, 
  3.68114}, {-3.06517, 3.80538}, {-3.20822, 3.93715}, {-3.40314, 
  4.06746}, {-3.65031, 4.18708}, {-3.9487, 4.28677}, {-4.29591, 
  4.35747}, {-4.68816, 4.39047}, {-5.12039, 4.37766}, {-5.58633, 
  4.31165}, {-6.07861, 4.18598}}

ListPlot[myList, AspectRatio -> 1]

From other examples I have seen, interpolation isn’t so simple, so any help here would be appreciated!

For linear interpolation, you can interpolate the x and y components separately. Each component becomes a one-dimensional function. The function value is e.g. the x component; the function parameter is the arc length. Since the parameter is the arc length, that makes it easy to sample from it uniformly. In my code below I make it even easier by rescaling the arc length so it runs from 0 to 1. All we have to do to sample the curve uniformly, then, is to sample the interpolation functions uniformly from 0 to 1.

arcLength = Rescale@Prepend[0]@Accumulate[Norm /@ Differences[myList]];
{x, y} = Transpose[myList];
xinterp = Interpolation[Transpose[{arcLength, x}], InterpolationOrder -> 1];
yinterp = Interpolation[Transpose[{arcLength, y}], InterpolationOrder -> 1];
pts = Table[Through[{xinterp, yinterp}[t]], {t, 0, 1, 0.02}];

ListLinePlot[
 myList,
 Epilog -> {
   Red,
   PointSize[Medium],
   Point[pts]
   }]

Output

Few more alternatives:

1. GraphUtilities`LineScaledCoordinate

You can use the built-in function GraphUtilities`LineScaledCoordinate:

Needs["GraphUtilities`"]

n = 50;

lscpts = LineScaledCoordinate[myList, N@#] & /@ Subdivide[n];

ListLinePlot[myList, Epilog -> {Red, Point@lscpts}]


Compare with pts from C.E.’s answer:

Total @ Chop[Abs[pts - lscpts]]
 {0, 0}

2. MeshFunctions -> {"ArcLength"}

llp = ListLinePlot[myList, MeshFunctions -> {"ArcLength"}, Mesh -> (n - 1), MeshStyle -> Red]


Extract the coordinates of mesh points and add the first and last element of myList:

mfpts = Join[{First @ myList}, 
   Cases[Normal[llp], Point[x_] :> x, All],
   {Last @ myList}];

Compare with lscpts:

Total @ Chop[Abs[lscpts - mfpts]]
{0, 0}

3. Illustrating JimB’s comment:

“Just use the points as is and figure out to which grid cell each belongs”:

DensityHistogram[myList, {{1}, {1}}, Mesh -> All, 
 MeshStyle -> Directive[Thin, Gray], 
 ChartLegends -> BarLegend[Automatic, LegendMarkerSize -> {30, 350}], 
 Epilog -> {Red, PointSize[Medium], Point @ myList}, 
 AspectRatio -> Automatic, ImageSize -> Large]


Tagged : /

Code Bug Fix: Interpolate weighted spherical coordinates using softmax probabilities

Original Source Link

I would like to interpolate the spherical coordinates of various points, using the probabilities calculated by a softmax layer at the end of a neural network, in order to predict the geo-coordinates of an image. Let me explain better with an example.

I have 8909 labels that identify regions of the USA territory, with which the spherical coordinates x, y, z of the centroids of these regions are associated. My goal is to use the probabilities extracted from a softmax layer at the end of the neural network over these labels as weights to interpolate the x, y, z coordinates of the 8909 classes, in order to obtain a point that is the weighted average (or weighted centroid) of these coordinates.

So I have a multi-output CNN: one output is the classifier over the 8909 classes, the other is an array containing the 3D coordinates. I use categorical cross-entropy to train the classifier and a custom geodesic loss to train the 3D output, so that the CNN is trained on both tasks concurrently. The 3D output is obtained by taking the dot product between the class probabilities predicted by the softmax layer and the 3D coordinates of each class.

I made several attempts, however I get very wrong results on the validation set because the calculated coordinates fall outside the USA borders, which is very strange because I should obtain coordinates belonging to the USA. Could this interpolation be the problem? Can anyone help me?
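For reference, a minimal sketch of the weighted-centroid step described above (made-up shapes and values, not the poster's network); it is just the dot product between the softmax probabilities and the per-class centroid coordinates:

import numpy as np

num_classes = 8909
rng = np.random.default_rng(0)

# Placeholder class centroids on the unit sphere (x, y, z per class)
centroids = rng.normal(size=(num_classes, 3))
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

# Stand-in for the network's logits, turned into softmax probabilities
logits = rng.normal(size=num_classes)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

weighted = probs @ centroids   # convex combination of the centroids

# Note: a convex combination of points on the sphere lies inside the sphere,
# so projecting back (weighted / np.linalg.norm(weighted)) may be needed
# before converting to latitude/longitude.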

Tagged : / / / /

Math Genius: Interpolation constraint in Hilbert space

Original Source Link

The paper “Geodesic Interpolating Splines” made the following comment on an interpolation problem:

Interpolation problem:

Let $\mathcal{H}$ be a Hilbert space, let $f_1, \dots, f_N \in \mathcal{H}$, and let $c_1,\dots,c_N \in \mathbb{R}$ be given.

Find $h \in \mathcal{H}$ such that $\|h\|$ is minimal subject to the constraints $\langle f_i, h \rangle = c_i$ for $i=1,\dots,N$.

Comment:

It is indeed clear that the constraints are not affected if $h$ is replaced by $h + v$ where $v$ is orthogonal to all the $f_i$, so that the solution must in fact be searched for in the linear space spanned by $f_1, \dots, f_N$, and the unknown $h$ can be expressed as a linear combination $h = \sum_{i=1}^{N} \alpha_i f_i$.

I understand that the interpolation constraints remain the same if we replace $h$ by $h + v$. But how does this property imply that the solution should be searched for in that linear span?

Is it possible to prove their claim with elementary knowledge of Hilbert spaces?

Let $P$ be the orthogonal projection onto the span of the $f_i$. If $h$ satisfies the constraints, then so does $P(h)$. But $\|P(h)\|\leq\|h\|$, with equality iff $h$ is in the span of the $f_i$. Thus, for any $h$ satisfying the constraints that lies outside the span, we can find an element of our Hilbert space with smaller norm inside the span of the $f_i$.

Let $U$ denote the span of the elements $f_1,\dots,f_N$. We know that if $h \in \mathcal{H}$ is a solution, then the element $h + v$ still satisfies the constraints for any $v \in U^\perp$.

Now, the solution $h$ can always be written in the form $h = h_1 + h_2$ with $h_1 \in U$ and $h_2 \in U^\perp$. If we take $v = -h_2$, then $h_1 = h + v$ satisfies the constraints and $\|h_1\|\leq\|h\|$, so $h_1$ must also be a solution to the interpolation problem. In other words: if the interpolation problem has a solution $h$, then it has a solution $h_1 \in U$.

If you could find a minimizer $h$ outside of $\operatorname{span}\{f_1,\cdots,f_N\}$, in the sense that $h$ is orthogonal to it, then since $\langle f_i,h\rangle=0=c_i$ we would have $c_1=\cdots=c_N=0$. This is either a contradiction (if some $c_i\neq0$ is given), or such an $h$ cannot be a minimizer at all: if $\operatorname{span}\{f_1,\cdots,f_N\}^\perp\neq\{0\}$, take $a\in\operatorname{span}\{f_1,\cdots,f_N\}^\perp$; then $a/n$ satisfies the given conditions for all $n\in\mathbb{N}$ and $\|a/n\|\rightarrow 0$, so the minimizer is $0\in\operatorname{span}\{f_1,\cdots,f_N\}$, which is again a contradiction.
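For completeness, a sketch of the step the quoted comment alludes to (my own notation, not taken from the paper): once the minimizer can be sought in $U=\operatorname{span}\{f_1,\dots,f_N\}$, writing $h=\sum_{j=1}^{N}\alpha_j f_j$ and imposing the constraints gives the finite linear system
$$
\sum_{j=1}^{N}\langle f_i,f_j\rangle\,\alpha_j=c_i,\qquad i=1,\dots,N,
$$
that is, $G\alpha=c$ with $G$ the Gram matrix of the $f_i$.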

Tagged : / / / /

Code Bug Fix: Prevent negative values in df.interpolate()

Original Source Link

I’m having trouble avoiding negative values in interpolation. I have the following data in a DataFrame:

current_country = 

idx Country     Region              Rank    Score     GDP capita    Family   Life Expect.    Freedom    Trust Gov.  Generosity  Residual    Year

289 South Sudan Sub-Saharan Africa  143     3.83200     0.393940    0.185190    0.157810    0.196620    0.130150    0.258990    2.509300    2016
449 South Sudan Sub-Saharan Africa  147     3.59100     0.397249    0.601323    0.163486    0.147062    0.116794    0.285671    1.879416    2017
610 South Sudan Sub-Saharan Africa  154     3.25400     0.337000    0.608000    0.177000    0.112000    0.106000    0.224000    1.690000    2018
765 South Sudan Sub-Saharan Africa  156     2.85300     0.306000    0.575000    0.295000    0.010000    0.091000    0.202000    1.374000    2019

And I want to interpolate the values for the missing year (2015) – shown below – using pandas’ df.interpolate()

new_row =

idx Country     Region              Rank    Score   GDP capita  Family     Life Expect.  Freedom    Trust Gov.  Generosity  Residual    Year

593 South Sudan Sub-Saharan Africa  0       np.nan  np.nan      np.nan     np.nan        np.nan     np.nan      np.nan      np.nan      2015

I create the df containing null values in all columns to be interpolated (as above) and append it to the original dataframe before interpolating, in order to populate those cells.

interpol_subset = current_country.append(new_row)
interpol_subset = interpol_subset.interpolate(method = "pchip", order = 2)

This produces the following df

idx Country     Region              Rank    Score     GDP capita    Family   Life Expect.    Freedom    Trust Gov.  Generosity  Residual    Year

289 South Sudan Sub-Saharan Africa  143     3.83200     0.393940    0.185190    0.157810    0.196620    0.130150    0.258990    2.509300    2016
449 South Sudan Sub-Saharan Africa  147     3.59100     0.397249    0.601323    0.163486    0.147062    0.116794    0.285671    1.879416    2017
610 South Sudan Sub-Saharan Africa  154     3.25400     0.337000    0.608000    0.177000    0.112000    0.106000    0.224000    1.690000    2018
765 South Sudan Sub-Saharan Africa  156     2.85300     0.306000    0.575000    0.295000    0.010000    0.091000    0.202000    1.374000    2019
4   South Sudan Sub-Saharan Africa  0       2.39355     0.313624    0.528646    0.434473   -0.126247    0.072480    0.238480    0.963119    2015

The issue: in the last row, the value in “Freedom” is negative. Is there a way to parameterize the df.interpolate function so that it doesn’t produce negative values? I can’t find anything in the documentation. I’m fine with the estimates apart from that negative value (although they’re a bit skewed).

I considered simply flipping the negative value to a positive one, but the “Score” value is the sum of all the other continuous features and I would like to keep it that way. What can I do here?

Here’s a link to the actual code snippet. Thanks for reading.

I doubt this is an issue with the interpolation itself. The main reason is the method you are using: ‘pchip’ will return a negative value for ‘Freedom’ anyway. If we take the values from your dataframe:

import numpy as np
import scipy.interpolate

y = np.array([0.196620, 0.147062, 0.112000, 0.010000])
x = np.array([0, 1, 2, 3])
pchip_obj = scipy.interpolate.PchipInterpolator(x, y)
print(pchip_obj(4))

The result is -0.126. I think that if you want a positive result, you should change the method you are using.
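A rough workaround (my own sketch, not part of the answer above, and it does not address the concern about keeping “Score” equal to the sum of the features): interpolate as before and then clamp the affected columns at zero, or fall back to a method that cannot overshoot the observed range.

import numpy as np
import pandas as pd

# Stand-in for the "Freedom" column with the appended empty row at the end
freedom = pd.Series([0.196620, 0.147062, 0.112000, 0.010000, np.nan])

pchip_filled = freedom.interpolate(method="pchip")    # extrapolates the trailing NaN (as in the question), can go negative
clamped = pchip_filled.clip(lower=0)                  # force non-negative estimates

linear_filled = freedom.interpolate(method="linear")  # never overshoots the observed range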

Tagged : / / / /

Server Bug Fix: Rescaling data to make a “continuous” function

Original Source Link

I have acquired many sets of data which all represent a single function, but which are randomly rescaled by a constant (due to the measurement specifics). I’m looking to stitch them together into a continuous sort of function by rescaling each data set; however, this has proven difficult since their ranges don’t always overlap. Ideally something like:

Rescaling to continuous data

Where the resulting absolute scale doesn’t matter, but the structural features are important.

The obvious solution is to interpolate/extrapolate nearby curves and minimize the differences between neighbors. However, I haven’t been able to make this work very well, as I’m not sure whether there’s a good way to select which curves should be paired/minimized together. Any suggestions?

Example={{{2.04,3.94},{2.46,3.81},{2.89,3.56},{3.1,3.18},{3.44,2.81},{3.75,2.42},{3.91,2.03},{4.12,1.75},{4.59,1.44},{5.,1.28},{5.14,1.17}},{{0.23,5.26},{0.4,6.02},{0.65,6.81},{0.96,7.47},{1.3,7.86},{1.68,7.96},{1.82,8.08},{2.15,7.84},{2.47,7.39},{2.78,6.78},{3.1,6.11},{3.43,5.33},{3.86,4.61},{4.1,3.81}},{{3.21,7.62},{3.43,6.8},{3.72,5.7},{4.04,4.81},{4.32,3.99},{4.67,3.39},{4.94,2.97},{5.29,2.85},{5.51,2.77},{5.95,3.16},{6.05,3.36}},{{6.79,2.11},{6.98,2.32},{7.2,2.6},{7.66,2.62},{7.83,2.71},{8.21,2.63},{8.5,2.55},{8.62,2.34},{8.97,2.04}},{{7.63,4.03},{7.93,4.18},{8.2,4.02},{8.49,3.87},{8.77,3.46},{9.22,3.13},{9.35,2.51},{9.61,2.21},{9.95, 1.86}}};

UPDATE

flinty suggested one technique, whereby data could be attached in order (say from left-to-right), and I’ve attempted a quick and dirty rendition of this:

SortedData=SortBy[Example,First];(*Sort by minimum x position*)
Result=SortedData[[1]];(*Rescaled Final Data is initially the first dataset*)
For[i=2,i<=Length[SortedData],i++,
OverlappingPoints=Select[SortedData[[i]],#[[1]]<=Max[Result[[All,1]]]&];
(*Find overlapping points of next set to final set*)
Scaling=If[OverlappingPoints=={}, 
NArgMin[(Interpolation[Result][SortedData[[i,1,1]]]-s*SortedData[[i,1,2]])^2+(s*Interpolation[SortedData[[i]]][Result[[-1,1]]]-Result[[-1,2]])^2,s],
(*If no points overlap, extrapolate and fit the nearest points at each end*)
NArgMin[Total[(Interpolation[Result][#[[1]]]-s*#[[2]])^2&/@OverlappingPoints],s]];
(*If there is overlap, then only use that to fit*)
Result=Sort[Mean/@GatherBy[Join[Result,{1,Scaling}*#&/@SortedData[[i]]],First]]] 
(*Collect rescaled data together*)
ListLinePlot[Result,PlotStyle->Black]

flinty suggestion

This result does a pretty good job, although it has two possible issues:

  1. Fitting one additional curve at a time has trouble with regions where more than two curves overlap. This can be seen in the region around (x=5), where there is more noise compared to the same region fit by eye.

  2. Interpolation requires nonduplicate input, so data with the same x values cannot be interpolated together. I have gotten around this by simply averaging the scaled y-value when x is the same, but I expect that this may not be the best option.

SECOND UPDATE

aooiiii had a great approach, and I modified it a bit, as QuadraticOptimization is a newer function that I can’t use at home. This version uses NMinimize to minimize the error in the scaling parameters (s) of the log-data, while regularizing the function (y) in several possible ways, using simple approximations of the first (“flat”), second (“smooth”) and third (“jerk”) derivatives at neighboring points. The main difference is that while aooiiii used many y’s spanning the gaps in the data, this version uses the input x positions to place the y points. I found the best-looking results using the third derivative (“jerk”), so the other regularization terms are commented out.

Stitch[d_]:=Module[{ss,sd,flat,smooth,jerk,errors,fit},
ss=Array[s,Length[d]];(*Scaling parameters*)
sd=Flatten[MapThread[{#[[All,1]],Log[#[[All,2]]]+#2}\[Transpose]&,{d,ss}],1];(*Changing to a log scale so scaling can't approach zero*)
xs=Union[sd[[All,1]]];(*List of unique x-values*)
ys=Array[y,Length[xs]];(*Corresponding y-function*)
(*flat=Total[Function[{x1,y1,x2,y2},((y2-y1)/(x2-x1))^2]@@@Flatten[Partition[{xs,ys}\[Transpose],2,1],{{1},{2,3}}]];(*Differences of nearby y-values*)*)
(*smooth=Total[Function[{x1,y1,x2,y2,x3,y3},(((x2-x1)(y3-y2)-(x3-x2)(y2-y1))/((x3-x2)(x3-x1)(x2-x1)))^2]@@@Flatten[Partition[{xs,ys}\[Transpose],3,1],{{1},{2,3}}]];(*Differences of nearby slopes*)*)
jerk=Total[Function[{x1,y1,x2,y2,x3,y3,x4,y4},(((x3(y1-y2)+x1(y2-y3)+x2(y3-y1))/((x1-x2)(x1-x3))-(x4(y2-y3)+x2(y3-y4)+x3(y4-y2))/((x4-x2)(x4-x3)))/((x2-x3) (x4+x3-x2-x1)))^2] @@@Flatten[Partition[{xs,ys}\[Transpose],4,1],{{1},{2,3}}]];(*Differences of nearby curvature*)
errors=Total[((sd[[All,1]]/.Rule@@@({xs,ys}\[Transpose]))-sd[[All,2]])^2];(*Differences of function to data*)
fit=NMinimize[(*flat/100+smooth/100+*)jerk/1000+errors/.s[1]->0,Join[ys,ss[[2 ;;]]]][[2]];(*Minimize all differences*)
stitched={xs,Exp[ys]}\[Transpose]/.fit;(*The optimized function*)
MapThread[{#[[All,1]],#[[All,2]]*#2}\[Transpose]&,{d,Exp[ss]}]/.s[1]->0/.fit(*Rescaled data*)]

Grid[{{"Initial Data","Final Scaled Data"},{ListLinePlot[Example,ImageSize->250],Show[ListLinePlot[Stitch[Example],ImageSize->250],ListPlot[stitched,PlotStyle->Directive[PointSize[0.02],Black]]]}}]

Final Rescaling

A quick and dirty proof-of-concept implementation of my QuadraticOptimization idea. I haven’t given it much thought, and the algorithm may require improvements, such as an irregular grid, a logarithmic scale, deciding how much and what type of smoothness penalty is needed, etc. The part I’m most unsure about is requiring the smoothed curve to stay above 1. There are probably better ways to prevent the optimizer from setting all of the scaling coefficients to 0, thus pointlessly achieving zero smoothness penalty and zero error.

data = Map[{Round[100 #[[1]]], #[[2]]} &, Example, {2}];
{min, max} = MinMax[Map[First, data, {2}]];
(*Discretizing*)

smoothness = Total@Table[(y[i] - 2 y[i + 1] + y[i + 2])^2, {i, min, max - 2}];
(*C2 smoothness penalty. One might combine several types of them here.*)

error = Total@Flatten@Table[
     (y[data[[i, j, 1]]] - s[i] data[[i, j, 2]])^2,
     {i, Length[data]},
     {j, Length[data[[i]]]}];

constr = Table[y[i] >= 1, {i, min, max}];

vars = Join[
   Table[y[i], {i, min, max}],
   Table[s[i], {i, Length[data]}]
   ];

sol = QuadraticOptimization[1000 smoothness + error, constr, vars];

patches = Table[{data[[i, j, 1]], data[[i, j, 2]] s[i]},
    {i, Length[data]},
    {j, Length[data[[i]]]}] /. sol;
smoothed = Table[{i, y[i]}, {i, min, max}] /. sol;

Show[{
  ListPlot[patches, Joined -> True], 
  ListPlot[smoothed, Joined -> True, 
   PlotStyle -> {Opacity[0.1], Thickness[0.05]}]
  }]


Here is an approach that estimates the multiplicative constants by taking the log of the response variable and estimating the resulting additive constants.

(* Take the log of the response so that the adjustment is additive 
   and include the adjustments for each set of data *)
(* Force the last data set to have an adjustment of 0 *)
data2 = data;
n = Length[data];
adj[n] = 0;
data2[[All, All, 2]] = Log[data[[#, All, 2]]] + adj[#] & /@ Range[Length[data]];

(* Determine the binning parameters *)
{xmin, xmax} = MinMax[data[[All, All, 1]]];
nBins = 20;
width = (xmax - xmin)/nBins;

(* Calculate total of the variances *)
t = Total[Table[Variance[Select[Flatten[data2, 1], 
  -width/2 <= #[[1]] - xmin - (i - 1) width <= width/2 &][[All, 2]]] /. Abs[z_] -> z,
  {i, 1, nBins + 1}]] /. Variance[{z_}] -> 0;

(* Minimize the total of the variances and plot the result *)
sol = FindMinimum[t, Table[{adj[i], 0}, {i, n - 1}]]
(* {0.0518024, {adj[1] -> 0.510144, adj[2] -> -0.157574, adj[3] -> -0.352569, adj[4] -> 0.447345}} *)

(* Plot results on original scale *)
data3 = data2;
data3[[All, All, 2]] = Exp[data2[[All, All, 2]] /. sol[[2]]];
ListPlot[data3, Joined -> True, PlotLegends -> Automatic]

Adjusted data

Tagged : /

Code Bug Fix: Linear interpolation between matrix indices

Original Source Link

I would like to know if there is a simple way to linearly interpolate between matrix coordinates.

The following points represent indices of a matrix:

10 40
0 40
0 30
0 20

Given these indices, I want to create a new list of indices:

10 40 # <--- from list above
9 40
8 40
...
1 40
0 40 # <--- from list above
0 39
0 38
...
0 31
0 30 # <--- from list above
0 29
...
0 21
0 20 # <--- from list above

So far, I have tried the following, which is not very fast and seems to be an unnecessarily complicated approach:

import numpy as np
idx = np.array([[10, 40], [0, 40], [0, 30], [0, 20]])
idx2 = []
for i in range(len(idx)-1):
    idx2.extend(np.linspace(idx[i], idx[i+1], num=10, endpoint=False, dtype=np.int, axis=0))
idx2.append(idx[-1])
print(np.array(idx2))
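One possible simplification (my own sketch, not from the original post): size each segment by its Chebyshev distance, so that consecutive indices differ by at most one step per coordinate, instead of always using num=10:

import numpy as np

idx = np.array([[10, 40], [0, 40], [0, 30], [0, 20]])

out = []
for a, b in zip(idx[:-1], idx[1:]):
    steps = int(np.abs(b - a).max())  # Chebyshev distance between the two indices
    out.append(np.linspace(a, b, num=steps, endpoint=False).round().astype(int))
out.append(idx[-1:])                  # close with the final index
path = np.concatenate(out)
print(path)

For the example above this reproduces the desired list exactly, since each segment happens to be 10 steps long.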

Tagged : / /

Math Genius: Failing to demonstrate a minimization for 4-neighbors interpolation

Original Source Link

I need to verify a minimization problem in the domain of linear interpolations.
In particular, the formula I’m trying to demonstrate is reported in this paper, in the section “Upsampled FMM”.
Fast Marching Methods are algorithms used for image segmentation, path planning and more. I’m studying path-planning techniques, and I have already worked through many proofs involving linear interpolation for the Field D* algorithm, so it seems odd to me that I fail to prove something with a procedure I already know quite well (algebraic minimization).

4-neighbour interpolation image

Here’s the problem statement (as seen in the paper, in the image C=E, B=F, A=G):
$$
u_E=\underset{0 < t < 0.5}{\operatorname{min}}\left\{u_F\,t+u_G(0.5-t)+\tau_A\sqrt{t^2+(0.5-t)^2}\right\}
$$

In principle, after finding the parameter $t$ that minimizes the formula and substituting it back, the authors say that the closed-form solution is:

\begin{align}
u_E &= \frac{u_F + u_G + \sqrt{\frac{\tau_A^2}{2} -(u_F-u_G)^2}}{2} & \frac{\tau_A^2}{2} -(u_F-u_G)^2 &>0\\
u_E &= \min(u_F,u_G)+ \tau_A/2 & &\text{otherwise}
\end{align}

But unfortunately, I fail to derive the above statement, both on paper and with symbolic solvers (Matlab).

I can provide more context; just ask and I’ll update the question if needed.
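One observation that may help (my own sketch, not taken from the paper, and it rests on the assumption that the linear weights are meant to sum to one, i.e. $2t$ and $1-2t$ rather than $t$ and $0.5-t$): with that normalization, setting the derivative to zero gives
$$
\frac{d}{dt}\left[2t\,u_F+(1-2t)\,u_G+\tau_A\sqrt{t^2+(0.5-t)^2}\right]
=2(u_F-u_G)+\tau_A\,\frac{2t-0.5}{\sqrt{t^2+(0.5-t)^2}}=0 ,
$$
and substituting the resulting interior stationary point back into the objective simplifies to
$$
u_E=\frac{u_F+u_G+\sqrt{\tfrac{\tau_A^2}{2}-(u_F-u_G)^2}}{2},
$$
while the boundary values $t\to0$ and $t\to0.5$ give $u_G+\tau_A/2$ and $u_F+\tau_A/2$, which is the $\min(u_F,u_G)+\tau_A/2$ branch.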

Tagged : / /