These image ray gathers you mentioned are the same as common image gathers?
I'd say "yes" - I think in some cases the "-ray" part has been dropped in the technical write-ups
If the output is flattened CDP gathers, could we then do some residual velocity analysis to update and get a better velocity model?
This is where people do "residual moveout analysis" or "RMO" to make a new velocity model.
That might be applied *directly* to the image gathers (by applying the residual moveout correction using the NMO equation) or you might use the RMO picks to *update* the model and run a new migration. That's all part of the "quality, time, cost" triangle that is constraining your project.
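The "apply directly" option can be sketched numerically. Below is a minimal, hedged illustration of a residual moveout correction using the hyperbolic NMO equation, t(x) = sqrt(t0² + (x/v)²): each trace is shifted by the traveltime difference between the picked velocity (`v_old`, hypothetical) and the updated one (`v_new`, hypothetical), then resampled by interpolation. Real RMO tools use spatially varying picks and more careful interpolation; this is only a sketch of the idea.

```python
import numpy as np

def residual_moveout(gather, offsets, t, v_old, v_new):
    """Sketch of a residual moveout correction on an image gather.

    gather  : 2-D array (n_samples, n_traces), one trace per offset
    offsets : 1-D array of offsets (m), one per trace
    t       : 1-D array of zero-offset times (s), increasing
    v_old, v_new : scalar RMS velocities (m/s), before/after the RMO pick
    (All names here are illustrative, not a specific package's API.)
    """
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # Hyperbolic (NMO-style) traveltimes under each velocity
        t_old = np.sqrt(t**2 + (x / v_old) ** 2)
        t_new = np.sqrt(t**2 + (x / v_new) ** 2)
        # Residual shift = difference of the two moveout curves;
        # resample the trace at the shifted times
        t_src = t + (t_old - t_new)
        corrected[:, j] = np.interp(t_src, t, gather[:, j],
                                    left=0.0, right=0.0)
    return corrected
```

Note that at zero offset the shift is zero, so the near trace is untouched, which is a quick sanity check on any moveout code.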
And if the input is in Source Index Number (SIN) order, is the output still flattened CDP gathers?
That depends on the internal working of the specific algorithm you have, as well as the details of the upstream software.
3DPreSTM reads inputs in any order and outputs offset planes - but my software can also efficiently sort-on-read, so it doesn't make any difference.
So - I can run the preSTM from sail-lines in shot order, and then read the output directly into the velocity analysis tool for RMO analysis and make QC stacks.
Your situation might be different - good software tends to pay attention to this kind of detail as it makes workflows easier(!)
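Conceptually, "sort-on-read" just means grouping traces by the desired header key (e.g. CDP) as they stream in, whatever the input order. A toy in-memory sketch (real software handles huge volumes and spills to disk; the names here are illustrative, not any vendor's API):

```python
from collections import defaultdict

def sort_on_read(trace_stream, key="cdp"):
    """Group traces into gathers keyed by a header word as they are read.

    trace_stream yields (headers, samples) pairs, where headers is a dict.
    The input can arrive in shot/SIN order; the output is keyed gathers.
    This is a toy sketch -- production code would not hold all data in RAM.
    """
    gathers = defaultdict(list)
    for headers, samples in trace_stream:
        gathers[headers[key]].append(samples)
    return gathers
```

So traces read in shot order come out grouped by CDP, ready for the velocity analysis tool.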
How can we obtain a final stack of the subsurface from these image ray gathers?
You stack them, usually with a mute, as you would with NMO corrected gathers.
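A minimal sketch of that stack: zero everything above a simple linear outer mute (parameterized here by an assumed `mute_velocity`, purely illustrative), then sum across offsets and normalise by the surviving fold so amplitudes stay comparable down the trace.

```python
import numpy as np

def stack_with_mute(gather, offsets, t, mute_velocity=1500.0):
    """Stack a flattened image gather with a simple linear outer mute.

    gather  : 2-D array (n_samples, n_traces), one trace per offset
    offsets : 1-D array of offsets (m)
    t       : 1-D array of times (s)
    Samples earlier than |offset| / mute_velocity are muted; the sum
    is normalised by the live fold at each time sample.
    (A real mute would be picked, tapered, and spatially varying.)
    """
    # Boolean mute mask per trace: keep samples below the mute line
    masks = np.stack([t >= abs(x) / mute_velocity for x in offsets], axis=1)
    fold = np.maximum(masks.sum(axis=1), 1)   # live traces per sample
    return (gather * masks).sum(axis=1) / fold
```

With a constant-amplitude gather the stack comes back at the same amplitude regardless of how many traces the mute removes, which is the point of the fold normalisation.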
Beware of the word "final" though; we have an *approximation* to the sub-surface geology. The "observable" is the normal ray scattered data.
Once you have applied a (mathematical) model to collapse the wavefield based on a (velocity) model of the sub-surface, you have an artifact that is certainly easier to interpret, but it's an approximation built on an approximation.
Every time I use the word "final" I have ended up re-running for some reason!!!
However, is it always true that the results of using angle domain CIGs are better than common offset CIGs?
All algorithms have their pros and cons. They are all approximations. As a rule of thumb we aim to use the *fastest* and *cheapest* approach we can, and then if that doesn't produce the "quality" we want, we spend more time, effort and money. All a single paper can do is to show specific examples of where this technique worked better - they won't show you examples of where the improvements were marginal, and they won't have run it on *all possible* data.
I take the (slightly cynical) view that most papers are selling you something - it might be software, services or just the brilliance of the authors and their organisations. The papers that review techniques and discuss their pros and cons openly tend to be better - be wary of any paper that doesn't identify the limitations and drawbacks of the approach!