<blockquote class="ip-ubbcode-quote">
<div class="ip-ubbcode-quote-title">quote:</div>
<div class="ip-ubbcode-quote-content">Originally posted by Fiendish:<br><blockquote class="ip-ubbcode-quote">
<div class="ip-ubbcode-quote-title">quote:</div>
<div class="ip-ubbcode-quote-content">Originally posted by Macwarrior:<br><blockquote class="ip-ubbcode-quote">
<div class="ip-ubbcode-quote-title">quote:</div>
<div class="ip-ubbcode-quote-content">Originally posted by Fiendish:<br>Yes, it actually is. See figures 6 and 8 and the text in figure 4 in:<br>
www.cse.ucsc.edu/~milanfar/SR-challengesIJIST.pdf<br>(not my paper)<br> </div>
</blockquote>
<br><br>Well, that's amazing. It's not quite the same as the movies' using a low-res image to create high-res ones, since it relies on an image sequence, but still -- it looks like that part of machine vision is getting much much closer to hollywood's ideal. </div>
</blockquote>You don't know that they aren't looking at one of many video frames in the movies. ;)<br>There are also techniques, like I said, that don't require more than one image. They just don't work as well in general. </div>
</blockquote>
<br><br>Actually, page 6 indicates that they're using temporal data from sequences of 45 or more frames to do the analysis. Impressive nonetheless.<br><br><br><blockquote class="ip-ubbcode-quote">
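<br><br>The basic multi-frame idea is simple enough to sketch: if each low-res frame samples the scene at a slightly different sub-pixel offset, you can place every frame's samples onto a finer grid and average. This is a minimal "shift-and-add" sketch, not the paper's actual algorithm, and it assumes the sub-pixel shifts are already known (real systems have to estimate them by registration, which is the hard part):<br>
<pre>
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Fuse shifted low-res frames onto a high-res grid (naive shift-and-add).

    frames: list of 2-D arrays, all the same shape
    shifts: list of (dy, dx) sub-pixel offsets, in low-res pixels
    factor: integer upsampling factor
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Each low-res sample lands at its sub-pixel position on the fine
        # grid, rounded to the nearest high-res pixel.
        ys = np.clip(np.arange(h)[:, None] * factor + round(dy * factor),
                     0, h * factor - 1).astype(int)
        xs = np.clip(np.arange(w)[None, :] * factor + round(dx * factor),
                     0, w * factor - 1).astype(int)
        np.add.at(hi, (ys, xs), frame)
        np.add.at(weight, (ys, xs), 1.0)
    # Average where any frame contributed; uncovered pixels stay zero.
    return np.divide(hi, weight, out=np.zeros_like(hi), where=weight > 0)
</pre>
With a 2x factor and four frames shifted by half a pixel in each direction, every high-res pixel gets covered and the fusion is exact; with fewer or unluckier shifts you get holes, which is why the serious methods add interpolation and deblurring on top.<br>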
<div class="ip-ubbcode-quote-title">quote:</div>
<div class="ip-ubbcode-quote-content">Originally posted by Horseradish:<br>(not to mention how the ruby sunglasses can absorb the energy without re-radiating it in some fashion).<br> </div>
</blockquote>
<br><br>It's even stranger in light (ha ha) of the fact that his optic beams are red -- you'd think a dark blue or green lens would do a much better job of absorbing that energy, while a red lens would simply pass it through.