2 weeks ago, a large underground quarry (near the surface) collapsed.
I have already processed the area with a standard PS approach using ERS and ENVISAT data, to see if anything was already happening in the past.
But since it is a field area, there are no PS in my results.
My suggestion would be, first of all, to look at the interferograms.
case A. you have no coherence. Then the question is: is the area in the mountains?
case A1. yes, it's in the mountains. Then download the latest software version and use the DEM-dependent coregistration refinement. This module requires a well-aligned DEM, and it still needs some optimization (it is a bit slow at the moment). However, it helps reach better coherence.
case A2. no mountains and no coherence (or no coherence even after the coregistration refinement). Then, unless something went wrong somewhere, there is no chance.
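To make the coherence check in case A concrete, here is a minimal numpy sketch of how interferometric coherence is typically estimated from two coregistered SLC images. The boxcar estimator, the window size, and the function names are illustrative assumptions, not the API of any particular software:

```python
import numpy as np

def boxcar(a, win):
    # boxcar (moving-window) sum via shifted accumulation;
    # zero padding means edge pixels see fewer samples
    h = win // 2
    out = np.zeros_like(a)
    pad = np.pad(a, h, mode="constant")
    for i in range(win):
        for j in range(win):
            out += pad[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def coherence(slc1, slc2, win=5):
    """Windowed complex coherence magnitude between two coregistered
    SLC images. slc1, slc2: 2-D complex arrays; win: window size
    (illustrative; real processors let you tune this)."""
    num = boxcar(slc1 * np.conj(slc2), win)
    den = np.sqrt(boxcar(np.abs(slc1) ** 2, win) *
                  boxcar(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```

Identical images give coherence 1 everywhere; two independent speckle fields give values near zero (biased upward by the finite window).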
case B. there is coherence. then you have 2 options:
case B1. the coherence is good and you can unwrap your interferograms reliably. Then do it, and analyze the unwrapped (UW) time series. Here you can use both parametric and non-parametric models (the “smart” function), and you can choose whether to use weights or not.
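As a sketch of the B1 case, this is what fitting a parametric (linear) model to an unwrapped phase time series can look like, with optional weights (e.g. per-epoch coherence). The phase-to-displacement conversion and the function name are my own illustrative choices, not the software's internals:

```python
import numpy as np

def fit_linear_velocity(t, phase_uw, wavelength, weights=None):
    """Weighted least-squares fit of a linear model to an unwrapped
    phase time series.
    t: acquisition times; phase_uw: unwrapped phase (radians);
    wavelength: radar wavelength (m); weights: optional per-epoch
    weights (None -> unweighted).
    Returns (velocity, intercept) in displacement units per unit of t."""
    # convert phase to line-of-sight displacement (sign convention assumed)
    disp = -wavelength / (4 * np.pi) * phase_uw
    A = np.column_stack([t, np.ones_like(t)])
    if weights is not None:
        w = np.sqrt(weights)
        A, disp = A * w[:, None], disp * w
    (v, c), *_ = np.linalg.lstsq(A, disp, rcond=None)
    return v, c
```

With a non-parametric ("smart") model you would replace the design matrix `A` with a smoother or a free time series; the weighted least-squares skeleton stays the same.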
case B2. the coherence is fine, but for some reason the interferograms are not good to unwrap. Then keep the wrapped phase and carry out the analysis using the coherence as a weight. In this way, the phase will be read from the wrapped interferograms. Here you must use a parametric model (linear or polynomial).
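The reason B2 requires a parametric model is that, without unwrapping, you can only check how well a candidate model explains the wrapped phase, typically by maximizing a coherence-weighted periodogram over the model parameters. A minimal sketch, assuming a linear model and a brute-force velocity search (grid, wavelength, and names are illustrative):

```python
import numpy as np

def periodogram_velocity(t, phase_wrapped, coh, wavelength, v_grid):
    """Estimate a linear LOS velocity directly from wrapped phase by
    maximizing the coherence-weighted model fit:
        gamma(v) = |sum_k coh_k * exp(j*(phi_k - model_k(v)))| / sum_k coh_k
    No unwrapping is needed, but the model must be parametric."""
    best_v, best_g = v_grid[0], -1.0
    for v in v_grid:
        model = -4 * np.pi / wavelength * v * t  # same sign convention as B1
        g = np.abs(np.sum(coh * np.exp(1j * (phase_wrapped - model)))) / np.sum(coh)
        if g > best_g:
            best_v, best_g = v, g
    return best_v, best_g
```

The search works because a wrong velocity leaves residual fringes that decorrelate the weighted sum, while the correct one aligns all epochs; note the 2π ambiguity, which bounds how wide the velocity search range can be.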
In both cases B1 and B2, the processed interferograms are defined by the images graph that you have chosen in the current processing session. I suggest that you first process the interferograms (be careful about the InSAR processing options) and look at them, to get a clear idea of your “raw materials”.
Which images graph should you choose? There are many factors to consider, such as the spatial coherence, but also the lengths of the normal and temporal baselines. Good coherence with small baselines gives results, but with poor reliability; long baselines give better estimates, but lower coherence…
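The baseline trade-off above is what an images graph encodes. A minimal sketch of a small-baseline-style pair selection, where the thresholds (one year, 200 m) are purely illustrative and would be tuned to your sensor and site:

```python
import numpy as np

def select_pairs(dates, bperp, max_btemp=365.0, max_bperp=200.0):
    """Build an images graph: connect every image pair whose temporal
    and perpendicular baselines fall under the given thresholds.
    dates: acquisition times in days; bperp: perpendicular baselines (m),
    both relative to a common master. Returns (i, j, btemp, bperp) edges."""
    pairs = []
    n = len(dates)
    for i in range(n):
        for j in range(i + 1, n):
            bt = abs(dates[j] - dates[i])
            bp = abs(bperp[j] - bperp[i])
            if bt <= max_btemp and bp <= max_bperp:
                pairs.append((i, j, bt, bp))
    return pairs
```

Tightening the thresholds gives fewer, more coherent interferograms (the "results but poor reliability" end); loosening them adds long-baseline pairs that constrain the estimates better but decorrelate more.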
A good thing to do is to try the different options on a small area first. That way you get a feel for what they do and for the outputs you get, without wasting too much time.
Once you have a clear idea, you can process your whole area.
We are working on an adaptive filtering approach that considers several characteristics of the dataset. I'll post an update when it is ready.