Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
RESPONSE TO ARGUMENTS
The examiner acknowledges the amendment of claims 1, 4 & 25-26 filed 12/29/2025. Applicant's arguments filed 12/29/2025 have been fully considered but are deemed moot in view of the new grounds of rejection. Because the amendments change the claim scope, a new ground of rejection is proper.
CLAIM REJECTIONS - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 23 & 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Matsushita et al. (U.S. Publication 2018/0048810) in view of Merler et al. (U.S. Publication 2014/0010456) & Spitzer (U.S. Publication 2018/0136720).
As to claims 1 & 25-26, Matsushita discloses an image processing apparatus comprising: a processor (320, Fig. 3 & [0026]); and a memory (326, Fig. 3 & [0026]) built in or connected to the processor (320, Fig. 3 & [0026]), wherein the processor (320, Fig. 3 & [0026]) generates and outputs a virtual viewpoint image with reference to a position and an orientation of a target object, wherein the target object is included in an imaging region and a plurality of images, on the basis of the plurality of images obtained by imaging the imaging region with a plurality of imaging devices of which at least either the imaging positions or the imaging directions are different (Abstract, Fig. 1-2, 6B & [0005] disclose a virtual viewpoint image generation system that generates a virtual viewpoint image based on a plurality of captured images obtained by capturing an image capturing target region from a plurality of different directions. Step S103, Fig. 6B & corresponding disclosure disclose image generation. See [0065], wherein images such that the virtual viewpoint transitions with time are generated.)
and controls a display aspect of the virtual viewpoint image by generating and outputting the virtual viewpoint image with reference to the adjustment position and the adjustment orientation ([0067] discloses that a display 323 is a functional unit for providing various information to a user ([0028]). It is also possible to create images such that the virtual viewpoint transitions with time ([0065]-[0066]).)
Matsushita is silent to generating an adjustment position and an adjustment orientation based on the position and the orientation of the target object by smoothing an amount of temporal changes in at least one of the position or the orientation of the target object.
However, Merler discloses generating an adjustment position and an adjustment orientation based on the position and the orientation of the target object by smoothing an amount of temporal changes in at least one of the position or the orientation of the target object. (Abstract, [0036]-[0042], Fig. 3, 12A-12D disclose techniques for tracking one or more objects at each position in an interval in a video input with the use of a Kalman filter: obtaining a first location estimate; obtaining a second location estimate and a movement estimate; and calculating a final estimate of a position and/or a velocity of the object with the Kalman filter.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita’s disclosure to include the above limitations in order to improve tracking stability. (See [0042], Figs. 12A-12D)
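By way of illustration only (this sketch is not part of the cited disclosures; all names and noise parameters are hypothetical), the Kalman-filter smoothing relied upon from Merler can be sketched for a one-dimensional position track as:

```python
# Illustrative 1D Kalman filter smoothing a noisy position track.
def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Return smoothed estimates; q = process noise, r = measurement noise."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                    # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update: move estimate toward measurement
        p *= (1 - k)              # shrink covariance after the update
        out.append(x)
    return out

raw = [0.0, 0.9, 2.2, 2.8, 4.1, 5.0, 5.9, 7.2]
smooth = kalman_smooth(raw)
```

Because each output is a convex combination of the prior estimate and the new measurement, the smoothed track changes less per frame than the raw measurements, which is the "amount of temporal changes" smoothing attributed to Merler above.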
Matsushita in view of Merler is silent to controlling a display region of the virtual viewpoint image by controlling a resolution for each region included in the display region of the virtual viewpoint image.
However, Spitzer discloses controlling a display region of the virtual viewpoint image by controlling a resolution for each region included in the display region of the virtual viewpoint image. ([0005] discloses controlling resolution by region in a displayed image via foveated rendering, where a focus region is rendered at higher resolution and outer/peripheral region(s) at lower resolution, expressly to reduce rendering/transmission cost while maintaining quality.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler’s disclosure to include the above limitations in order to reduce rendering and/or transmission workload while maintaining higher quality in the region of interest and lower quality in peripheral regions.
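By way of illustration only (not drawn from Spitzer's disclosure; the function name, window parameters, and block averaging are hypothetical), per-region resolution control of the kind described above can be sketched as follows, keeping a focus window at full resolution and block-averaging the periphery:

```python
# Illustrative per-region resolution control (foveated rendering sketch):
# a focus window keeps full resolution; the periphery is block-averaged.
def foveate(img, cx, cy, half, block=2):
    """img: 2D list of pixel values; (cx, cy): focus center; half: half-width
    of the full-resolution window; block: peripheral downsampling factor."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Leave blocks inside the focus region untouched.
            if abs(y - cy) <= half and abs(x - cx) <= half:
                continue
            ys = range(y, min(y + block, h))
            xs = range(x, min(x + block, w))
            avg = sum(img[j][i] for j in ys for i in xs) / (len(ys) * len(xs))
            for j in ys:
                for i in xs:
                    out[j][i] = avg
    return out
```

The block size controls the effective resolution of each peripheral region, so a larger `block` trades image detail for lower rendering/transmission cost, consistent with the motivation stated above.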
As to claim 2, Matsushita in view of Merler & Spitzer discloses everything as disclosed in claim 1. In addition, Matsushita discloses virtual viewpoint generation from multi-camera images with a target object position/orientation and display control of the virtual view (Figs. 1, 6A-6B & corresponding disclosure).
Matsushita in view of Merler & Spitzer is silent to wherein the processor controls the display aspect according to the amount of temporal changes smaller than an actual amount of temporal changes in the position and the orientation of the target object.
However, Merler's Abstract, Fig. 1 & corresponding disclosure disclose temporal smoothing of tracked pose/position across video frames using a Kalman filter, so that the displayed trajectory jitters less than the raw measurements.
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler & Spitzer’s disclosure to include the above limitations in order to reduce measurement noise while preserving underlying motion, improving visual stability without increasing camera count or bandwidth.
As to claim 23, Matsushita in view of Merler & Spitzer discloses everything as disclosed in claim 1 but is silent to wherein the processor generates and outputs a display screen in which the virtual viewpoint images are arranged in a time series.
However, Merler’s [0013] discloses wherein the processor generates and outputs a display screen in which the virtual viewpoint images are arranged in a time series. ([0013] discloses displaying an array of a plurality of frames of the input video.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler & Spitzer’s disclosure to include the above limitations in order to enhance navigation and scrubbing across time.
Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Matsushita et al. (U.S. Publication 2018/0048810) in view of Merler et al. (U.S. Publication 2014/0010456) & Spitzer (U.S. Publication 2018/0136720) as applied to claim 1 above, and further in view of Auberger (U.S. Publication 2007/0085927).
As to claim 4, Matsushita in view of Merler & Spitzer discloses everything as disclosed in claim 1 but is silent to wherein the processor smooths the amount of temporal changes by obtaining a moving average of an amount of time-series changes in the position and the orientation.
However, Auberger’s Fig. 2 & [0024] discloses wherein the processor smooths the amount of temporal changes by obtaining a moving average of an amount of time-series changes in the position and the orientation.
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler & Spitzer’s disclosure to include the above limitations in order to implement a low complexity temporal smoother.
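By way of illustration only (not drawn from Auberger's disclosure; the function name and window length are hypothetical), the moving-average smoothing of a time series of positions recited in claim 4 can be sketched as:

```python
# Illustrative moving average over a time series of position values.
def moving_average(series, window=3):
    """Smooth a sequence by averaging each value with its predecessors
    over a trailing window (shorter at the start of the series)."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)   # trailing window start
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out
```

A trailing window keeps the smoother causal (only past samples are used), which is why it is a low-complexity alternative to a full Kalman filter for this purpose.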
As to claim 5, Matsushita in view of Merler & Spitzer discloses everything as disclosed in claim 1 but is silent to wherein the processor controls the display aspect of the virtual viewpoint image according to the amount of temporal changes in a case where the amount of temporal changes is within a predetermined range.
However, Auberger’s Fig. 1-2 & Corresponding Disclosure discloses wherein the processor controls the display aspect of the virtual viewpoint image according to the amount of temporal changes in a case where the amount of temporal changes is within a predetermined range.
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler & Spitzer’s disclosure to include the above limitations in order to suppress imperceptible micro updates and visual “nervousness” when motion is small, thereby conserving compute and keeping the virtual view visually stable during low motion periods while still allowing full updates when motion exceeds the range.
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Matsushita et al. (U.S. Publication 2018/0048810) in view of Merler et al. (U.S. Publication 2014/0010456) & Spitzer (U.S. Publication 2018/0136720) as applied to claim 1 above, and further in view of Tantalo et al. (U.S. Publication 2002/0149693).
As to claim 6, Matsushita in view of Merler & Spitzer discloses everything as disclosed in claim 1 but is silent to wherein the processor changes a time interval for generating the virtual viewpoint image according to the amount of temporal changes.
However, Tantalo discloses adaptive frame rate selection based on the motion between frames (a displacement threshold), implemented by a frame rate clock. ([0035]-[0036], [0040]-[0043] & Fig. 5 disclose that the displacement is compared to the maximum desired displacement; if an object's displacement exceeds the maximum desired displacement, the frame rate for the next frame is changed to one that produces an acceptable amount of image displacement.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler & Spitzer’s disclosure to include the above limitations in order to increase update cadence when motion is large so the synthesized viewpoint tracks scene dynamics with bounded per-frame displacement.
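By way of illustration only (not drawn from Tantalo's disclosure; the function name, displacement threshold, and interval clamps are hypothetical), a displacement-threshold rule for changing the generation time interval can be sketched as:

```python
# Illustrative adaptive frame-interval selection: shorten the interval
# when per-frame displacement exceeds a desired maximum.
def next_interval(displacement, interval, max_disp=10.0,
                  min_interval=1 / 120, max_interval=1 / 15):
    """Scale the frame interval so expected per-frame displacement stays
    near max_disp (pixels); clamp to the supported interval range."""
    if displacement <= 0:
        return max_interval           # no motion: slowest update cadence
    scaled = interval * max_disp / displacement
    return max(min_interval, min(max_interval, scaled))
```

Fast motion thus yields a shorter interval (higher frame rate), which is the bounded per-frame displacement behavior cited as the motivation above.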
As to claim 7, Matsushita in view of Merler, Spitzer & Tantalo discloses everything as disclosed in claim 6 but is silent to in a case where the amount of temporal changes is equal to or more than a first predetermined value, the processor sets the time interval to be shorter than a first reference time interval.
However, Tantalo discloses reducing the frame interval when motion warrants per-frame tracking. ([0040], for example; see also all disclosure regarding interval reduction.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Matsushita in view of Merler, Spitzer & Tantalo’s disclosure to include the above limitations in order to reduce temporal aliasing (motion “skips”) and keep the synthesized viewpoint temporally coherent with the scene dynamics.
CONCLUSION
No prior art has been found for claims 8-22 & 24 in their current form.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen P Coleman whose telephone number is (571)270-5931. The examiner can normally be reached Monday-Thursday 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Stephen P. Coleman
Primary Examiner
Art Unit 2675
/STEPHEN P COLEMAN/Primary Examiner, Art Unit 2675