Prosecution Insights
Last updated: April 19, 2026
Application No. 18/609,025

INFORMATION PROCESSING APPARATUS AND METHOD, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Mar 19, 2024
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 7m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allowance Rate: 52% (15 granted / 29 resolved; -10.3% vs Tech Center average)
Interview Lift: +31.0% allowance rate for resolved cases with an interview vs. without
Average Prosecution Length: 2y 7m
Currently Pending: 45 applications
Total Applications: 74 (across all art units)

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)

Based on career data from 29 resolved cases; Tech Center averages are estimates.
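The headline rate can be recomputed from the raw counts shown above; a quick sketch (the Tech Center average is implied from the reported -10.3% delta, an assumption about how the dashboard derives it):

```python
# Recompute the examiner's career allowance rate from the raw counts above.
granted, resolved = 15, 29
allow_rate = granted / resolved
print(f"Career allowance rate: {allow_rate:.1%}")   # 51.7%, displayed as 52%

# The dashboard reports this as -10.3% vs the Tech Center average,
# which implies a TC-average allowance rate of about:
implied_tc_avg = allow_rate + 0.103
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 62.0%
```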

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendment filed on January 30, 2026. Claims 1, 3-7, 11, and 17-18 have been amended. Claim 2 has been cancelled. Claims 1 and 3-18 remain pending and stand rejected.

Response to Arguments

Applicant's arguments with respect to Claims 1, 17, and 18, filed on January 30, 2026, regarding the rejection under 35 U.S.C. § 103, namely that the prior art does not teach the limitations "generate three-dimensional models of moving objects at a specific frequency based on a plurality of videos obtained from a plurality of image capture apparatuses", "identify movement with time passage in the three-dimensional model of each object", and "change the specific frequency of each three-dimensional model based on the movement with time passage in the three-dimensional model of each object, wherein the one or more processors execute the instructions to identify the movement with time passage in the three-dimensional model of the object based on a movement in a barycenter of the three-dimensional model of each object between different times", have been fully considered, but are moot in view of the new grounds of rejection: these limitations are now taught by the combination of Sugio and Kobayashi. Applicant's arguments with respect to Claim 11, filed on January 30, 2026, regarding the rejection under 35 U.S.C. § 103, namely that the prior art does not teach the limitation "the one or more processors execute the instructions to reduce the specific frequency as the identified movement with time passage in the three-dimensional model of the object becomes smaller", have been fully considered, but are moot in view of the new grounds of rejection: this limitation is now taught by the combination of Sugio and Lindh.
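The limitations at issue describe an algorithmic idea: regenerate each object's 3D model at a rate tied to how far the model's barycenter moves between time steps. A minimal, hypothetical sketch of that idea (function names and thresholds are illustrative only, not drawn from the application or the cited references):

```python
import math

def barycenter(points):
    """Barycenter (mean position) of a 3D model's vertex list."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def update_frequency(prev_points, curr_points, base_hz=30.0,
                     slow_threshold=0.05, slow_hz=5.0):
    """Pick a model-regeneration frequency from barycenter movement.

    Movement between two time steps is the displacement of the model's
    barycenter ("a movement in a barycenter of the three-dimensional model
    ... between different times"); small movement maps to a reduced rate.
    """
    b0, b1 = barycenter(prev_points), barycenter(curr_points)
    movement = math.dist(b0, b1)
    return base_hz if movement > slow_threshold else slow_hz

prev = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
nearly_static = [(x + 0.001, y, z) for x, y, z in prev]
moving = [(x + 1.0, y, z) for x, y, z in prev]

print(update_frequency(prev, nearly_static))  # 5.0 (little movement)
print(update_frequency(prev, moving))         # 30.0 (large movement)
```

A nearly static object falls back to the reduced rate while a clearly moving object keeps the base rate, which is the behavior the disputed claim language describes.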
Regarding the arguments directed to Claims 3-10 and 12-16: these claims depend, directly or indirectly, on independent Claims 1, 11, and 17-18, respectively, and Applicant presents no arguments beyond those for the independent claims. The limitations of those claims, as taught by the cited combinations, were previously addressed as explained above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 7, and 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (US 20190311526 A1, previously cited), hereinafter referenced as Sugio, in view of Kobayashi et al. (US 20190266786 A1), hereinafter referenced as Kobayashi.
Regarding Claim 1, Sugio discloses an information processing apparatus (Sugio, [0101]: teaches a multi-viewpoint video imaging device 111 <read on information processing apparatus>), comprising: one or more memories storing instructions (Sugio, [0107]: teaches the multi-viewpoint video imaging device 111 including imaging device 121, which includes memory 124, which stores shooting settings <read on instructions>); and one or more processors executing the instructions to (Sugio, [0307]: teaches a processor for "reading out and executing the software program recorded in a recording medium such as a hard disk or a semiconductor memory"): generate three-dimensional models of moving objects at a specific frequency based on a plurality of videos obtained from a plurality of image capture apparatuses (Sugio, [0196]: teaches a three-dimensional space reconstructing device 115A including a first model generator 133 for generating a first model <read on 3D model of object>, a second model generator 134 for generating a second model, and a third model generator 135 for generating a third model, where the 3D models are generated based on a free-viewpoint video generating system 105 <read on obtained plurality of videos> as shown in FIG. 17; FIG. 
17 teaches a multi-viewpoint video imaging device including a plurality of imaging devices <read on plurality of image capture apparatuses>; [0197]: teaches updating models at frequencies <read on specific frequency> with respect to specific regions (i.e., foreground and background regions)); identify movement with time passage in the three-dimensional model of each object (Sugio, [0170]: teaches generating a differential model to determine a difference between a foreground model at the current time and a foreground model at a previous time <read on identify change with time passage>); and change the specific frequency of each three-dimensional model based on the movement with time passage in the three-dimensional model of each object (Sugio, [0200]: teaches the 3D models being updated at different frequencies (i.e., generating foreground model at 30 frames per second (FPS)) <read on changing specific frequency> based on whether the model is in the foreground or background; Note: it should be noted that although a "change" in frequency is not explicitly stated, one skilled in the art would understand that if a 3D model moves from the background region to the foreground region, the system would update the frequency of the 3D model), wherein [[the one or more processors execute the instructions to identify the movement with time passage in the three-dimensional model of the object based on a movement in a barycenter of the three-dimensional model of each object between different times.]] However, Sugio does not expressly disclose the one or more processors execute the instructions to identify the movement with time passage in the three-dimensional model of the object based on a movement in a barycenter of the three-dimensional model of each object between different times.
Kobayashi discloses the one or more processors execute the instructions to identify the movement with time passage in the three-dimensional model of the object based on a movement in a barycenter of the three-dimensional model of each object between different times (Kobayashi, [0086]: teaches a viewpoint determination unit 52 calculating "a translation movement amount d <read on identify movement with time passage> from the position p1 of the barycenter 71A of the previous frame <read on between different times> to the position p1' of the barycenter 71A of the current frame on the depth image 72 in the x direction," where the object 71 can be a part of a 3D model; Note: it should be noted that the claim limitation is being interpreted as identifying movement, with respect to time, of a 3D model based on its barycenter calculated at different points in time). Kobayashi is analogous art with respect to Sugio because they are from the same field of endeavor, namely generating a plurality of viewpoints for a 3D environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have the system split a 3D model into separate 3D bounding boxes as taught by Kobayashi into the teaching of Sugio. The suggestion for doing so would allow the system to better calculate the position of each part of the whole 3D model, thereby generating accurate 3D positional data that can be incorporated into the free-viewpoint video generating system. Therefore, it would have been obvious to combine Kobayashi with Sugio. Regarding Claim 17, it recites limitations similar in scope to those of Claim 1, but directed to an information processing method. As shown in the rejection, the combination of Sugio and Kobayashi discloses the limitations of Claim 1.
Additionally, Sugio discloses an information processing method (Sugio, [0040]: teaches an encoding method that processes information), comprising:… Thus, Claim 17 is met by Sugio according to the mapping presented in the rejection of Claim 1, given the information processing apparatus corresponds to an information processing method. Regarding Claim 18, it recites the limitations that are similar in scope to Claim 1, but in a non-transitory computer readable storage medium. As shown in the rejection, the combination of Sugio and Kobayashi discloses the limitations of Claim 1. Additionally, Sugio discloses a non-transitory computer readable storage medium storing a program for causing a computer to perform an information processing method (Sugio, [0107]: teaches the multi-viewpoint video imaging device 111 including imaging device 121, which includes memory 124 <read on non-transitory computer readable storage medium> that stores shooting settings <read on program>, generating <read on information processing method> a multi-viewpoint video), comprising:… Thus, Claim 18 is met by Sugio according to the mapping presented in the rejection of Claim 1, given the information processing apparatus corresponds to a non-transitory computer readable storage medium. Regarding Claim 3, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 1. 
Sugio does not expressly disclose the limitations of Claim 3; however, Kobayashi discloses wherein the barycenter is a barycenter of a cuboid that circumscribes the three-dimensional model (Kobayashi, [0083]: teaches "the viewpoint determination unit 52 determines the main object to be an object having a largest bounding box <read on cuboid> among the plurality of objects"; [0084]: teaches for a current frame, "the object 71 moves in the x direction (the right direction in the drawing) in the camera coordinate system of a predetermined virtual camera of the previous frame, and a three-dimensional position s of a barycenter 71A of the object 71 on the world coordinate system moves to a three-dimensional position s'" as shown in FIG. 3; Note: it should be noted that the examiner is interpreting the bounding box as including a barycenter). Kobayashi is analogous art with respect to Sugio because they are from the same field of endeavor, namely generating a plurality of viewpoints for a 3D environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have the system split a 3D model into separate 3D bounding boxes as taught by Kobayashi into the teaching of Sugio. The suggestion for doing so would allow the system to better calculate the position of each part of the whole 3D model, thereby generating accurate 3D positional data that can be incorporated into the free-viewpoint video generating system. Therefore, it would have been obvious to combine Kobayashi with Sugio. Regarding Claim 4, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 1.
Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: generate simplified three-dimensional models of each object that are coarser than a three-dimensional model used in generation of a virtual viewpoint video (Sugio, [0223]: teaches the three-dimensional distribution device changing the quality <read on simplified 3D model> of the 3D model based on a network band; Note: it should be noted that one skilled in the art would understand that a simplified 3D model is also a coarser 3D model), and identify the movement with time passage in the three-dimensional model of each object using the simplified three-dimensional models (Sugio, [0126]: teaches a foreground model that makes large motion changes <read on identify time delta in 3D model>, such as a person or a ball in motion). Regarding Claim 7, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 1. Additionally, Sugio further discloses wherein in a case where the identified movement with time passage in the three-dimensional model of the object has been determined to be equal to or smaller than a threshold, the one or more processors execute the instructions to suspend generation of the three-dimensional model of the object (Sugio, [0114]: teaches event detector 113 detecting a displacement from camera 122, where when the sensing information exceeds a threshold value, the background region in the video is changed by the threshold value; Note: it should be noted that it is being interpreted where if the sensing information <read on identified time delta in 3D model> is below the threshold value, then no changes are made, thereby suspending generation of the background models). Regarding Claim 13, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 1. 
Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: determine whether the object is a target of changing the specific frequency based on an attribute of the object or the three-dimensional model thereof (Sugio, [0202]: teaches a data transferor 119 determining the level of significance <read on target of changing specific frequency> depending on, for example, whether the target model is close to a specific feature point or an object such as a ball or is close <read on attribute of object> to a viewpoint position of many viewers; [0203]: teaches each of the models being a set of at least one object (e.g., a person, a ball, or an automobile) identified by object recognition or the like for foreground and background regions; Note: it should be noted that objects in the background region are being interpreted as objects not being the target of changing the specific frequency), and change the specific frequency with respect to the object of the determined target of changing (Sugio, [0200]: teaches the 3D models being updated at different frequencies (i.e., generating foreground model at 30 frames per second (FPS)) <read on changing specific frequency> based on whether the model is in the foreground or background). Regarding Claim 14, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 13. 
Additionally, Sugio further discloses wherein the one or more processors execute the instructions to determine that the object is the target of changing the specific frequency in a case where a size of the three-dimensional model of the object is larger than a threshold (Sugio, [0114]: teaches event detector 113 detecting a displacement from camera 122, where when the sensing information exceeds a threshold value <read on larger than threshold>, the background region in the video is changed by the threshold value; Note: it should be noted that the change is being interpreted as the update frequencies of the models being increased). Regarding Claim 15, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 13. Additionally, Sugio further discloses wherein the one or more processors execute the instructions to determine that the object is not the target of changing the specific frequency in a case where the three-dimensional model of the object has been determined to be a person (Sugio, [0203]: teaches each of the models being a set of at least one object (e.g., a person, a ball, or an automobile) identified by object recognition or the like for foreground and background regions; Note: it should be noted that objects in the background region are being interpreted as objects not being the target of changing the specific frequency). Regarding Claim 16, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 14. 
Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: set the threshold based on the attribute of the object or the three-dimensional model of the object (Sugio, [0114]: teaches the event detector 113 detecting a displacement from camera 122, where when sensing information exceeds a threshold value, a background region in the video is changed by the threshold value or more <read on setting threshold>; Note: it should be noted that the determination between foreground and background objects is being interpreted as the determination of object attributes, such as movement speed). Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (US 20190311526 A1, previously cited), hereinafter referenced as Sugio, in view of Kobayashi et al. (US 20190266786 A1), hereinafter referenced as Kobayashi as applied to Claim 1 above respectively, and further in view of Lindh (US 20220237849 A1, previously cited). Regarding Claim 5, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 1. The combination of Sugio and Kobayashi does not expressly disclose the limitations of Claim 5; however, Lindh discloses wherein the one or more processors execute the instructions to identify the movement with time passage in each three-dimensional model of each object at a time interval longer than a time interval of frames of the plurality of videos (Lindh, [0086]: teaches system 40 performing an interpolation process between two temporally spaced animated states <read on time interval of 3D model being longer than time interval of frames of video>; Note: it should be noted that one skilled in the art would understand that interpolation requires an insertion of frames to make an animation appear more smooth, which would lengthen the animation frequency, thereby making the animation frequency length "longer" than the frame rate of the video). 
Lindh is analogous art with respect to Sugio, in view of Kobayashi because they are from the same field of endeavor, namely dynamic object animation update frequencies in videos. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement eye tracking to detect the user's gaze on a video as taught by Lindh into the teaching of Sugio, in view of Kobayashi. The suggestion for doing so would allow the system to further determine areas of interest and apply dynamic animation update frequencies based on the gaze of the user, thereby further saving on overall rendering performance. Therefore, it would have been obvious to combine Lindh with Sugio, in view of Kobayashi. Regarding Claim 6, the combination of Sugio, Kobayashi, and Lindh discloses the information processing apparatus of Claim 5. Additionally, Sugio further discloses wherein the one or more processors execute the instructions to identify the movement with time passage in the three-dimensional model of each object at a time interval that is M times the time interval of the frames of the plurality of videos (where M is a natural number equal to or larger than two) based on a three-dimensional model obtained in a current frame and on a three-dimensional model obtained in a frame that is M frames ahead of the current frame (Sugio, FIG. 14 teaches foreground models being generated from time interval T 1 to T n + 1 <read on 3D models obtained in current frame and M frames ahead>; the variable n is being interpreted as M being a natural number greater than or equal to 2). Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (US 20190311526 A1, previously cited), hereinafter referenced as Sugio, in view of Kobayashi et al.
(US 20190266786 A1), hereinafter referenced as Kobayashi as applied to Claim 7 above, and further in view of Meyer (US 20080209149 A1, previously cited). Regarding Claim 8, the combination of Sugio and Kobayashi discloses the information processing apparatus of Claim 7. Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: store three-dimensional models that have been generated at times of respective frames (Sugio, [0124]: teaches the three-dimensional space reconstructing device 115 generating a 3D model of the shooting environment by using the model generation information obtained from event detector 113, and storing the generated 3D model; [0197]: teaches the three-dimensional space reconstructing device 115A updating the generated models at different frequencies within each viewpoint <read on respective frames>), wherein while generation of the three-dimensional model of the object is suspended, [[a pointer indicating a position of a latest three-dimensional model among three-dimensional models of the object that have already been stored is stored]] (Sugio, [0114]: teaches event detector 113 detecting a displacement from camera 122, where when the sensing information exceeds a threshold value, the background region in the video is changed by the threshold value; Note: it should be noted that it is being interpreted where if the sensing information is below the threshold value, then no changes are made, thereby suspending generation of the background models <read on 3D model generation suspension state>). However, the combination of Sugio and Kobayashi does not expressly disclose while generation of the three-dimensional model of the object is suspended, a pointer indicating a position of a latest three-dimensional model among three-dimensional models of the object that have already been stored is stored.
Meyer discloses while generation of the three-dimensional model of the object is suspended, a pointer indicating a position of a latest three-dimensional model among three-dimensional models of the object that have already been stored is stored (Meyer, [0012]: teaches using indirect/reference pointers that describe the location, size, and properties of an object <read on 3D model>; FIG. 4 teaches pointer-related read/write instructions for referenced objects, such as allocating space for a pointer reference and reading an already stored pointer). Meyer is analogous art with respect to Sugio, in view of Kobayashi because they are from the same field of endeavor, namely generating virtual objects for software applications, such as video. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an object-based processor architecture that allows exact pointer identification of objects as taught by Meyer into the teaching of Sugio, in view of Kobayashi. The suggestion for doing so would allow for a pointer generation unit (PGU) to create and assign pointers to new instantiated objects from memory, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Meyer with Sugio, in view of Kobayashi. Regarding Claim 9, the combination of Sugio, Kobayashi, and Meyer discloses the information processing apparatus of Claim 8. Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: generate an image of the object viewed from a virtual viewpoint based on a three-dimensional model of the object (Sugio, FIG.
5 teaches rendering of a determined viewpoint <read on virtual viewpoint> of a 3D reconstructed scene <read on generate image of virtual viewpoint> that includes 3D objects <read on 3D model>), and generate a virtual viewpoint video using the image (Sugio, FIG. 5 teaches a video display terminal displaying a viewpoint <read on generate virtual viewpoint> of a rendered viewpoint of the 3D reconstructed scene), wherein the one or more processors execute the instructions to generate the image of the object by reading out the stored three-dimensional model of the object (Sugio, [0307]: teaches structural components being implemented by a CPU/processor that reads out and executes software program <read on 3D model>; Note: it should be noted that it is common in the art to store 3D models in a software program), and [[in a case where the pointer is stored, the one or more processors execute the instructions to read out the three-dimensional model from the position instructed by the pointer.]] However, the combination of Sugio and Kobayashi does not expressly disclose in a case where the pointer is stored, the one or more processors execute the instructions to read out the three-dimensional model from the position instructed by the pointer. Meyer discloses in a case where the pointer is stored, the one or more processors execute the instructions to read out the three-dimensional model from the position instructed by the pointer (Meyer, FIG. 4 teaches reading a stored pointer of an object <read on 3D model>, which performs instructions that copies said pointer and performs a compare pointers operation). Meyer is analogous art with respect to Sugio, in view of Kobayashi because they are from the same field of endeavor, namely generating virtual objects for software applications, such as video.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an object-based processor architecture that allows exact pointer identification of objects as taught by Meyer into the teaching of Sugio, in view of Kobayashi. The suggestion for doing so would allow for a pointer generation unit (PGU) to create and assign pointers to new instantiated objects from memory, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Meyer with Sugio, in view of Kobayashi. Regarding Claim 10, the combination of Sugio, Kobayashi, and Meyer discloses the information processing apparatus of Claim 8. Additionally, Sugio further discloses wherein the one or more processors further execute the instructions to: generate an image of the object viewed from a virtual viewpoint based on a three-dimensional model of the object (Sugio, FIG. 5 teaches rendering of a determined viewpoint <read on virtual viewpoint> of a 3D reconstructed scene <read on generate image of virtual viewpoint> that includes 3D objects <read on 3D model>), and generating a virtual viewpoint video using the image (Sugio, FIG. 
5 teaches a video display terminal displaying a viewpoint <read on generate virtual viewpoint> of a rendered viewpoint of the 3D reconstructed scene), wherein the one or more processors execute the instructions to generate the image of the object by reading out the stored three-dimensional model of the object (Sugio, [0307]: teaches structural components being implemented by a CPU/processor that reads out and executes software program <read on 3D model>; Note: it should be noted that it is common in the art to store 3D models in a software program), and [[in a case where the pointer is stored, the one or more processors execute the instructions to use a three-dimensional model that has already been read out instead.]] However, the combination of Sugio and Kobayashi does not expressly disclose in a case where the pointer is stored, the one or more processors execute the instructions to use a three-dimensional model that has already been read out instead. Meyer discloses in a case where the pointer is stored, the one or more processors execute the instructions to use a three-dimensional model that has already been read out instead (Meyer, [0079]: teaches a load pointer instruction from object cache <read on stored pointer case>; [0080]: teaches an object creation instruction <read on use 3D model> after loading the pointer). Meyer is analogous art with respect to Sugio, in view of Kobayashi because they are from the same field of endeavor, namely generating virtual objects for software applications, such as video. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement an object-based processor architecture that allows exact pointer identification of objects as taught by Meyer into the teaching of Sugio, in view of Kobayashi. 
The suggestion for doing so would allow for a pointer generation unit (PGU) to create and assign pointers to new instantiated objects from memory, thereby improving overall rendering performance. Therefore, it would have been obvious to combine Meyer with Sugio, in view of Kobayashi. Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (US 20190311526 A1, previously cited), hereinafter referenced as Sugio, in view of Lindh (US 20220237849 A1, previously cited). Regarding Claim 11, Sugio discloses an information processing apparatus (Sugio, [0101]: teaches a multi-viewpoint video imaging device 111 <read on information processing apparatus>), comprising: one or more memories storing instructions (Sugio, [0107]: teaches the multi-viewpoint video imaging device 111 including imaging device 121, which includes memory 124, which stores shooting settings <read on instructions>); and one or more processors executing the instructions to (Sugio, [0307]: teaches a processor for "reading out and executing the software program recorded in a recording medium such as a hard disk or a semiconductor memory"): generate three-dimensional models of moving objects at a specific frequency based on a plurality of videos obtained from a plurality of image capture apparatuses (Sugio, [0196]: teaches a three-dimensional space reconstructing device 115A including a first model generator 133 for generating a first model <read on 3D model of object>, a second model generator 134 for generating a second model, and a third model generator 135 for generating a third model, where the 3D models are generated based on a free-viewpoint video generating system 105 <read on obtained plurality of videos> as shown in FIG. 17; FIG. 
17 teaches a multi-viewpoint video imaging device including a plurality of imaging devices <read on plurality of image capture apparatuses>; [0197]: teaches updating models at frequencies <read on specific frequency> with respect to specific regions (i.e., foreground and background regions)); identify a movement with time passage in the three-dimensional model of each object (Sugio, [0170]: teaches generating a differential model to determine a difference between a foreground model at the current time and a foreground model at a previous time <read on identify change with time passage>); and change the specific frequency of each three-dimensional model based on the movement with time passage in the three-dimensional model of each object (Sugio, [0200]: teaches the 3D models being updated at different frequencies (i.e., generating foreground model at 30 frames per second (FPS)) <read on changing specific frequency> based on whether the model is in the foreground or background; Note: it should be noted that although a "change" in frequency is not explicitly stated, one skilled in the art would understand that if a 3D model moves from the background region to the foreground region, the system would update the frequency of the 3D model), wherein [[the one or more processors execute the instructions to reduce the specific frequency as the identified movement with time passage in the three-dimensional model of the object becomes smaller.]] However, Sugio does not expressly disclose the one or more processors execute the instructions to reduce the specific frequency as the identified movement with time passage in the three-dimensional model of the object becomes smaller. 
Lindh discloses the one or more processors execute the instructions to reduce the specific frequency as the identified movement with time passage in the three-dimensional model of the object becomes smaller (Lindh, [0082]: teaches an animation updating frequency of individual objects 1 being reduced by skipping rendered frames <read on reduce specific frequency> so that no animation updating is performed for certain rendered frames <read on movement with time passage becoming smaller>; [0077]: teaches decreasing the animation update frequency for specific shader-animated objects 1 for parts of the scene that are not currently being actively viewed by the user).

Lindh is analogous art with respect to Sugio because they are from the same field of endeavor, namely dynamic object animation update frequencies in videos. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement eye tracking to detect the user's gaze on a video, as taught by Lindh, into the teaching of Sugio. The suggestion for doing so would allow the system to further determine areas of interest and apply dynamic animation update frequencies based on the gaze of the user, thereby further saving on overall rendering performance. Therefore, it would have been obvious to combine Lindh with Sugio.

Regarding Claim 12, the combination of Sugio and Lindh discloses the information processing apparatus of Claim 11. Additionally, Sugio further discloses wherein the one or more processors execute the instructions to set the specific frequency to one of:

a first frequency at which the three-dimensional model is generated (Sugio, FIG. 14 teaches foreground and background models being generated from time interval T1 to Tn+1 <read on setting specific frequency to first frequency>) for each of frames of the plurality of videos (Sugio, FIG. 14 teaches foreground and background models being generated from time interval T1 to Tn+1 for a free-viewpoint video <read on setting specific frequency to first frequency for each frame of each video>);

a state where generation of the three-dimensional model has been suspended (Sugio, [0114]: teaches event detector 113 detecting a displacement from camera 122, where when the sensing information exceeds a threshold value, the background region in the video is changed by the threshold value; Note: this is being interpreted such that if the sensing information is below the threshold value, then no changes (i.e., animation/motion) are made, thereby suspending generation of the background models <read on 3D model generation suspension state>; in addition, it is common in the art to vary animation updates between zero and a desired amount); and

at least one frequency between the first frequency and the suspended state (Sugio, [0184]: teaches a differential model, which is a difference <read on frequency between first frequency and suspended state> between a foreground model at the current time and a foreground model at a previous time, where the foreground model at the current time is generated "by adding the foreground model at the previous time and the differential model"; [0185]: teaches a differential model for the background models).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wurmlin et al. (US 20090315978 A1) discloses generating a 3D representation of a dynamically changing 3D scene; and Aman et al. (US 20070279494 A1) discloses an automatic system that records an event area, where it classifies and tracks foreground and background objects in 3D.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
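As a reference point for the Claim 12 mapping, the three recited settings of the specific frequency (a per-frame first frequency, a suspended state, and at least one frequency in between) can be sketched as a simple selector. The threshold values and the intermediate rate below are hypothetical assumptions for illustration, not values from the claims or the cited art.

```python
def update_state(movement, frame_rate_hz=30.0,
                 suspend_threshold=0.01, full_threshold=0.1):
    """Map identified per-object movement to one of three recited settings:
    the first (per-frame) frequency, a suspended state, or a frequency
    between the two. All numeric thresholds here are illustrative."""
    if movement < suspend_threshold:
        return ("suspended", 0.0)                  # generation suspended
    if movement >= full_threshold:
        return ("first_frequency", frame_rate_hz)  # regenerated for every frame
    return ("intermediate", frame_rate_hz / 2.0)   # between the two extremes
```

On this sketch, a motionless object falls into the suspended state, while any movement at or above the full threshold restores per-frame generation.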
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG, whose telephone number is (703) 756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Mar 19, 2024
Application Filed
Oct 28, 2025
Non-Final Rejection — §103
Dec 29, 2025
Interview Requested
Jan 20, 2026
Examiner Interview Summary
Jan 20, 2026
Applicant Interview (Telephonic)
Jan 23, 2026
Response Filed
Feb 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
83%
With Interview (+31.0%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
