Prosecution Insights
Last updated: April 19, 2026
Application No. 18/132,071

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Status: Final Rejection (§103)
Filed: Apr 07, 2023
Examiner: COFINO, JONATHAN M
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 62%
PTA Risk: Moderate
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 62% (130 granted / 210 resolved; at TC average)
Interview Lift: +32.2% for resolved cases with an interview (strong)
Typical Timeline: 2y 4m avg prosecution; 13 applications currently pending
Career History: 223 total applications across all art units
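For readers who want to sanity-check the dashboard, the headline numbers above reduce to simple arithmetic. Below is a minimal Python sketch; the variable names are ours, and the additive interview adjustment is an assumption that happens to be consistent with the displayed 62% and 94% figures.

```python
# Reproduce the dashboard's headline figures from the raw counts shown above.
# Sketch only; names are illustrative and the additive lift is an assumption.

granted, resolved = 130, 210   # "130 granted / 210 resolved"
interview_lift = 0.322         # "+32.2% Interview Lift"

career_allow_rate = granted / resolved               # 0.6190... -> shown as 62%
with_interview = min(career_allow_rate + interview_lift, 1.0)  # 0.9410...

print(f"Career allow rate: {career_allow_rate:.1%}")  # 61.9%
print(f"With interview:    {with_interview:.1%}")     # 94.1% -> shown as 94%
```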

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 64.7% (+24.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 210 resolved cases
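The per-statute deltas are measured against a Tech Center average estimate, which can be backed out of the table row by row. A sketch, using only the figures shown above:

```python
# Back out the Tech Center average implied by each "vs TC avg" delta above.
# Sketch only; statute labels and rates are taken from the table as shown.

examiner_rate = {"101": 6.4, "103": 64.7, "102": 10.2, "112": 12.3}   # percent
delta_vs_tc = {"101": -33.6, "103": 24.7, "102": -29.8, "112": -27.7}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # delta = examiner rate - TC average
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Every row backs out to the same TC average estimate of 40.0%.
```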

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on/after Mar. 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments

Applicant’s arguments, see p., filed 15 December 2025, with respect to the title have been fully considered and are persuasive. The objection to the title has been withdrawn. Applicant’s arguments, see pp. 10-13, filed 15 December 2025, with respect to the rejection(s) of claims 1 and 11-12 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection(s) have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Moore et al. (U.S. PG-PUB 2015/0201181). Please see the rationale for the rejection(s) of the newly-amended claim limitations in the Office action below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (U.S. Patent 11,252,329; 'CIER') in view of Moore et al. (U.S. PG-PUB 2015/0201181, 'MOORE').

Regarding claim 11, CIER discloses an information processing method comprising: … processor(s) (CIER; FIG. 3, Col., Lines 60-62; “… hardware central processing unit(s) (“CPU”) or other hardware processors 305 …”) obtaining [3-D] scanned data obtained by sensing … object(s) (CIER; Col. 29, Lines 23-39; “As the pose of the phone at the time each frame is captured from its built-in camera can be determined from its visual data and/or IMU data, this allows the pose of the external camera to be determined similarly to the room shape matching method above. Depth/point cloud matching [‘[3-D] scanned data’]. The phone may be able to directly measure or deduce depth information, through techniques such as stereo, structured light, time-of-flight sensors (e.g., a Lidar sensor) [‘sensing … object(s)’]. … for external cameras, depth information can be inferred directly from still RGB images obtained from the camera. Both of these depth maps can be used to generate a point cloud, and an optimal correspondence between these two point-clouds can then be found, providing a relative pose.
This technique could also be combined with RGB information from each, improving the information available to perform the matching.”); the … processor(s): referring to the [3-D] scanned data and identifying a [3-D] model corresponding to at least one object among the … object(s) (CIER; FIG. 1B; Col. 17, Lines 3-17; “… in addition, such objects … may further include other elements within the rooms, such as furniture 191-193 (e.g., a couch 191; chair 192; table 193 [‘the … object(s)’]; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with an acquisition location, such as “entry” for acquisition location 210A or “living room” for acquisition location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) …”); and outputting the [3-D] model thus identified (CIER; Col. 31, Lines 15-20; “Combine geometry information obtained via SLAM on the phone with objects detected in the panoramic imagery to form a rich understanding of fixtures, furniture, and other items of interest within the house. This information can be used during visualization of the space for an end user …”), the identifying including: performing a process of object detection on the [3-D] scanned data, so as to identify the … object(s) (CIER; Col. 22, Lines 22-35; “Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc.— … a first planar area 298 corresponding to the north wall of the living room is identified, with a second planar area 299 corresponding to windows 196-1 being further identified. … [otherwise], such an estimated 3D shape of the living room may be determined by using depth data captured by the mobile computing device 185 in the living room, whether in addition to or instead of using visual data of … image(s) captured by the camera device 186 and/or mobile computing device 185 in the living room.”), and

CIER does not explicitly disclose searching, based on the … object(s) and … [3-D] model candidates, for the [3-D] model corresponding to the … object(s), which MOORE discloses (MOORE; FIGS. 3A-3C; ¶ 0056; “When the processor 111 searches for an object model [amongst the] object models {[3-D] model candidates}, … object model(s) may be similar in shape or structure to a portion of the first visual data 306. … a body of a bottle (e.g., the target object 310) may be similar in shape/structure to either a cylinder or a box {[3-D] model candidates}. The processor 111 … determined which of the … object models have the closest fit for the analyzed portion of the first visual data 306. … the processor 111 may assign a score ([e.g.], a recognition accuracy percentage) as to the degree of similarity between a particular object model of the … object models and the analyzed portion of the first visual data 306.
… the processor 111 may choose the object model … associated with the highest associated score (e.g., recognition accuracy percentage), as the object model that corresponds to the analyzed portion of the first visual data 306. … the processor 111 determines the parameters of the chosen object model.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing method of CIER to include the searching, based on the … object(s) and … [3-D] model candidates, for the [3-D] model corresponding to the … object(s) of MOORE. The motivation for this modification is to provide a device which can recognize objects for increased environmental awareness and obstacle avoidance (MOORE; ¶ [0006]).

Independent claim 1, after its preamble, recites essentially similar limitations when compared to independent claim 11; therefore, the same motivation to combine references will be maintained. Regarding claim 1, CIER-MOORE disclose an information processing device comprising … processor(s) (CIER; FIG. 3, ‘server computing system(s) 300’, ‘hardware processors 305’; Col. 34, Lines 50-67 ~ Col. 35, Lines 1-6) … to execute: … ([The following limitations are substantially similar to those recited in independent claim 11.]).

Independent claim 12, after its preamble, recites essentially similar limitations when compared to independent claim 11; therefore, the same motivation to combine references will be maintained. Regarding claim 12, CIER-MOORE disclose a computer-readable non-transitory storage medium in which a program is stored, the program causing a computer to function as an information processing device (CIER; Col. 37, Lines 45-53; “… systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage mediums, such as a hard disk or flash drive or other non-volatile storage device, volatile or non-volatile memory (e.g., RAM or flash RAM), a network storage device, or a portable media article (e.g., a DVD disk, a CD disk, an optical disk, a flash memory device, etc.) to be read by an appropriate drive or via an appropriate connection.”), the program causing the computer to execute: … ([The following limitations are substantially similar to those recited in independent claim 11.]).

Regarding claim 3, CIER-MOORE disclose the information processing device according to claim 1, wherein: in the object detecting process, the … processor(s) accept information designating a partial space in a subject space including the … object(s) (MOORE; ¶ 0059; “FIG. 3C illustrates a result of the category object recognition. The processor 111 may recognize that the target object 310 is similar to one of the object models. The first enclosure 350 may be a bounding box, a bounding circle [‘partial space in a subject space’] … The first enclosure 350 has a first center point 316. When the first enclosure 350 is a bounding box, the first center point 316 is the point with approximately equal distance from each side of the bounding box. When the first enclosure 350 is a bounding circle, the first center point 316 may be the center of the bounding circle. … the processor 111 may determine the first center point 316 such that the first center point 316 is positioned on, corresponds to, or falls within a portion of the visual data set 304 corresponding to the target object 310.
The target object 310 may … be positioned within, around, or adjacent to the first enclosure 350. The processor 111 determines that a first target data (which is a portion of the first visual data 306) corresponds to the target object 310 to recognize the target object 310.”), … apply the process of object detection to [3-D] scanned data corresponding to the partial space identified on a basis of the information designating the partial space (CIER; Col. 22, Lines 10-26; “… images … captured in the living room of the house 198 [‘information designating the partial space’] [are] analyzed … to determine an estimated 3D shape of the living room, such as from a 3D point cloud of features detected in the video frames [‘applies the process of object detection to [3-D] scanned data’] (e.g., using SLAM and/or SfM and/or MVS techniques, and optionally further based on IMU data captured by the mobile computing device 185). … information 255k reflects an example portion of such a point cloud for the living room, … to correspond to a northwesterly portion of the living room (e.g., to include northwest corner 195-1 of the living room, as well as windows 196-1) [‘the partial space’] in a manner similar to image 250c of FIG. 2C. Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc.”).

Regarding claim 10, CIER-MOORE disclose the information processing device according to claim 1, wherein: in the output process, the … processor(s) further output [3-D] data corresponding to at least part of the [3-D] scanned data (CIER; Col. 31, Lines 15-20; “… Combine geometry information obtained via SLAM on the phone with objects detected in the panoramic imagery to form a rich understanding of fixtures, furniture, and other items of interest within the house. This information can be used during visualization of the space for an end user”).

Regarding claim 13, CIER-MOORE disclose the information processing device according to claim 1, wherein: in the searching process, the … processor(s) perform the searching, in … [3-D] model candidates, for the [3-D] model corresponding to the … object(s) so as to output the [3-D] model corresponding to the [3-D] scanned data (MOORE; ¶ 0056; “… the processor 111 searches for an object model of the … object models {‘[3-D] model candidates’}, more than one object model may be similar in shape/structure to a portion of the first visual data 306. … the processor 111 … chooses [‘output’] the object model of the … object models associated with the highest associated score (e.g., recognition accuracy percentage), as the object model that corresponds to the analyzed portion of the first visual data 306.”) obtained by sensing … object(s) (MOORE; ¶ 0036; “… smart necklace 100 includes two pairs of stereo cameras 121, which … provide depth information … The stereo cameras 121 may face forward … to establish a field of view (FOV). … The stereo cameras 121 provide 3D information such as depth in front of the user.”).

Regarding claim 14, CIER-MOORE disclose the information processing device according to claim 1, wherein: in the detecting/searching process, the … processor(s) detect identification information (MOORE; FIG. 2; ¶ 0046-47; “… at block 220, … the smart necklace 100 … detects a candidate object … based on the image data received at block 210.
… the onboard processing array 110 may detect the candidate object by identifying a candidate region [‘identification information’] of the received image data, such as a region of the image that includes high entropy. … the onboard processing array 110 may detect a high entropy region in the acquired target image data that includes a spray bottle. … the onboard processing array 110 may utilize a sliding window algorithm to identify the candidate region of the received image data. … the onboard processing array 110 may detect the candidate object by utilizing a feature descriptor algorithm or an image descriptor algorithm … The onboard processing array 110 may bias detections to … spatially located region(s) of interest based on application, scene geometry and/or prior information. The onboard processing array 110 includes … object detection parameter(s) to facilitate the detection of the candidate object. … the … object detection parameter(s) is/are a window size, a noise filtering parameter, an estimated amount of light, an estimated noise level, a feature descriptor parameter, an image descriptor parameter …”), and … search the [3-D] model based on the identification information (MOORE; ¶ 0049; “… the onboard processing array 110 may recognize the candidate object by utilizing a feature descriptor algorithm or an image descriptor algorithm … In which the onboard processing array 110 utilizes a feature descriptor or image descriptor algorithm, the onboard processing array 110 may extract a set of features from a candidate region identified by the onboard processing array 110 [which] may then access a reference set of features of an object recognition reference model from an object recognition database … and then compare the extracted set of features with the reference set of features of the object recognition reference model [‘search the [3-D] model based on the identification information’] … The onboard processing array 110 may extract a set of features from the high entropy region of the acquired target image data that includes a bottle and compare the extracted set of features to reference sets of features for … reference bottle model(s). When the extracted set of features match the reference set of features, the onboard processing array 110 may recognize an object (e.g., recognizing a bottle when the extracted set of features from the high entropy region of the acquired target image data that includes the bottle match the reference set of features for a reference bottle model).”).

Regarding claim 15, CIER-MOORE disclose the information processing device according to claim 1, wherein: in the detecting/searching process, the … processor(s) detect a name of the object, and … search the [3-D] model based on the name (MOORE; ¶ 0050; “… the object recognition module may assign an identifier to the recognized object [‘detect a name of the object’]. … the identifier may be an object category identifier (e.g., "bottle" when the extracted set of features match the reference set of features for the "bottle category" or "cup" when the extracted set of features match the reference set of features for the "cup" object category) [‘search the [3-D] model based on the name’] …”).

Claims 2, 4, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over CIER in view of MOORE as applied to claims 1 and 6 above, respectively, and further in view of Mott et al. (U.S. Patent 9,734,634; 'MOTT').
Regarding claim 2, CIER-MOORE disclose the information processing device according to claim 1; however, CIER-MOORE do not explicitly disclose that in the searching process, the … processor(s) perform(s) the searching with use of similarities between the [3-D] model corresponding to the … object(s) and the … [3-D] model candidates; and in the identifying process, the … processor(s) perform a presenting process and a selecting process, the presenting process presenting, to a user, … [3-D] model candidates having a relatively high similarity and the selecting process selecting, on a basis of an input from the user, the [3-D] model corresponding to the … object(s), which MOTT discloses (MOTT; FIGS. 7-9; “… some embodiments may allow a user to select the size of a container 970, and submit a search term to a search engine which returns results in the form of objects 960, 962 that fit into the selected size of a container 970 [‘selecting, on a basis of an input from the user, the [3-D] model corresponding to the at least any one of the … object(s)’]. … a user may enter the size of a container 970, either while a camera environment is showing or otherwise (e.g., at a product page or other website), and then a user may enter a search term such as “brown couches.” In response to receiving a search term and dimensions, objects 960, 962 may be shown to a user, where objects 960 and 962 are brown couches that would fit into the selected size of a container 970 [‘searching with use of similarities between the [3-D] model and the … [3-D] model candidates’]. … the objects may be shown in an example user interface 920, such that the search results may be placed into a 3D container 970. … when searching based on a term and the dimensions of a container, only the objects which have associated representations of an object 980 may be displayed. Similarly, … other controls may be used to narrow the objects included in a search result such as a minimum size, a color, a cost, a retailer, a brand, a manufacturer, a maximum size of a particular dimension (e.g., maximum height, maximum width, maximum depth) [‘presenting, to a user, … [3-D] model candidates having a relatively high similarity’], etc. … A suggestion engine may be implemented to provide recommendations based at least in part upon information known about a user/user ID, the dimensions of a container 970, a search term, …”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 1 of CIER-MOORE to include the searching with use of similarities between the [3-D] model corresponding to the … object(s) and the … [3-D] model candidates, and, in the identifying process, the performing of a presenting process and a selecting process, the … presenting, to a user, … [3-D] model candidates having a relatively high similarity and the … selecting, on a basis of an input from the user, the [3-D] model corresponding to the … object(s), of MOTT. The motivation for this modification is to provide systems and methods for displaying 3D containers in a computer-generated environment. A computing device may provide a user with a catalog of objects which may be purchased. To view what an object may look like prior to purchasing the object, a computing device may show a 3D container that has the same dimensions as the object.
The 3D container may be located and oriented based on a 2-D marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object (MOTT, see Abstract). More specifically, MOTT teaches a system for providing bounding boxes/shapes for containing/enveloping consumer products to be virtually overlaid onto a captured image of a user’s physical environment. This system allows for a user to rapidly assess whether the consumer product may fit into a user’s environment without necessitating a full render of the consumer object, which may contain complex textures, geometries, colorations, etc.

Regarding claim 4, CIER-MOORE disclose the information processing device according to claim 1; however, CIER-MOORE do not explicitly disclose that in the output process, the … processor(s) accepts user's selection of the [3-D] model output in the output process, … outputs … other [3-D] model candidate(s) corresponding to the [3-D] model thus selected, … accepts user's selection of any one of the … other [3-D] model candidate(s), and … outputs, as the [3-D] model, the selected one of the … other [3-D] model candidate(s), which MOTT discloses (MOTT; Col. 13, Lines 37-62; “… some embodiments may allow a user to select the size of a container 970, and submit a search term to a search engine which returns results in the form of objects 960, 962 that fit into the selected size of a container 970 [‘accepts user's selection of the [3-D] model output in the output process’]. … a user may enter the size of a container 970, either while a camera environment is showing or otherwise (e.g., at a product page or other website), and then a user may enter a search term such as “brown couches.” In response to receiving a search term and dimensions, objects 960, 962 may be shown to a user, where objects 960 and 962 are brown couches that would fit into the selected size of a container 970 [‘outputs … other [3-D] model candidate(s) corresponding to the [3-D] model thus selected’]. … the objects may be shown in … [UI] 920, such that the search results may be placed into a 3D container 970. … when searching based on a term and the dimensions of a container, only the objects which have associated representations of an object 980 may be displayed [‘outputs, as the [3-D] model, the selected one of the … other [3-D] model candidate(s)’]. Similarly, … other controls may be used to narrow the objects included in a search result such as a minimum size, a color, a cost, a retailer, a brand, a manufacturer, a maximum size of a particular dimension (e.g., maximum height, maximum width, maximum depth) [‘accepts user's selection of any one of the … other [3-D] model candidate(s)’], etc. … A suggestion engine [is] implemented to provide recommendations based … in part upon information known about a user/user ID, the dimensions of a container 970, a search term … [etc.]”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 1 of CIER-MOORE to include the various teachings of MOTT. The motivation for this modification is to provide systems and methods for displaying 3D containers in a computer-generated environment. A computing device may provide a user with a catalog of objects which may be purchased.
To view what an object may look like prior to purchasing the object, a computing device may show a 3D container that has the same dimensions as the object. The 3D container may be located and oriented based on a 2-D marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object (MOTT, see Abstract). More specifically, MOTT teaches a system for providing bounding boxes/shapes for containing/enveloping consumer products to be virtually overlaid onto a captured image of a user’s physical environment. This system allows for a user to rapidly assess whether the consumer product may fit into a user’s environment without necessitating a full render of the consumer object, which may contain complex textures, geometries, colorations, etc.

Regarding claim 16, CIER-MOORE-MOTT disclose the information processing device according to claim 2, wherein: after the selecting process, the … processor(s) output the [3-D] model candidate having relatively high similarity (MOORE; ¶ 0056; “When the processor 111 searches for an object model of the … object models, more than one object model may be similar in shape or structure to a portion of the first visual data 306. … a body of a bottle (e.g., the target object 310) may be similar in shape or structure to either a cylinder or a box. The processor 111 … determines which of the … object models have the closest fit for the analyzed portion of the first visual data 306. … the processor 111 may assign a score ([e.g.], a recognition accuracy percentage) as to the degree of similarity between a particular object model of the plurality of object models and the analyzed portion of the first visual data 306. … the processor 111 may choose the object model of the plurality of object models associated with the highest associated score (e.g., recognition accuracy percentage) [‘output the [3-D] model candidate having relatively high similarity’], as the object model that corresponds to the analyzed portion of the first visual data 306.”).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over CIER in view of MOORE and MOTT as applied to claim 4 above, and further in view of Hatfield et al. (U.S. PG-PUB 2022/0122096, 'HATFIELD').

Regarding claim 5, CIER-MOORE-MOTT disclose the information processing device according to claim 4; however, CIER-MOORE-MOTT do not explicitly disclose that the … processor(s) execute a learning process of causing a similarity determination model used in the searching process to perform learning with use of the information supplied from the user, the information indicating which of the … other [3-D] model candidate(s) has been selected, which HATFIELD discloses (HATFIELD; ¶ 0041; “Virtual environment rendering system (VERS) 112 … renders a virtual environment corresponding to a physical environment. … VERS 112 may utilize input data that includes multiple images … of a premises, including panoramic and/or 360-degree video. … the input data can include feeds from multiple digital video cameras disposed throughout a home of a user, capturing multiple views of the premises, including multiple views of the same scene from different viewing angles, providing physical environment information to the VERS 112 [which] may interface with machine learning system 122 to perform image analysis to identify and/or classify objects within the video feeds. These objects may then be indicated in the virtual environment with additional highlighting and/or annotation.
The VERS 112 may then replace an object from the physical environment with a different object to perform a product performance estimation [‘learning process of causing a similarity determination model used in the searching process to perform learning with use of the information supplied from the user’]. … a physical environment of a kitchen may include a refrigerator of type “Model A.” The VERS 112 may substitute the “Model A” refrigerator with a “Model B” refrigerator [‘other [3-D] model candidate(s) has been selected’], allowing a user to interact with the “Model B” refrigerator in a virtual environment that is very similar to his/her own kitchen, but with the “Model B” refrigerator in place of the user's own “Model A” refrigerator. … the user can better visualize the “Model B” refrigerator in his/her own kitchen.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 4 of CIER-MOORE-MOTT to include the executing a learning process of causing a similarity determination model used in the searching process to perform learning with use of the information supplied from the user, the information indicating which of the … other [3-D] model candidate(s) has been selected of HATFIELD. The motivation for this modification is to allow a user to perform certain tasks in a digital twin virtual reality environment, such as opening and closing doors of a virtual refrigerator (as opposed to the actual refrigerator in the physical environment), operating the refrigerator's ice dispenser, and/or other relevant tasks before purchasing the alternative physical refrigerator (HATFIELD; ¶ [0041]).

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over CIER in view of MOORE as applied to claim 1 above, and further in view of Murugappan et al. (U.S. PG-PUB 2023/0093342, 'MURUGAPPAN').

Regarding claim 6, CIER-MOORE disclose the information processing device according to claim 1; however, CIER-MOORE do not explicitly disclose that the … processor(s) … execute a generating process of generating an image which is stereoscopically visible, which MURUGAPPAN discloses (MURUGAPPAN; ¶ 0124; “The location [is] determined in a different manner, if the remote visualization is [3-D] (e.g., using a stereoscopic display). … with visual depth being available, the user may [3-D] place the virtual sub-image, thus potentially making the projection unnecessary.”), which is obtained by referring to the [3-D] scanned data, and in which the … object(s) is/are replaced with the [3-D] model (MURUGAPPAN; FIG. 5A; ¶ 0096; “In Step 508, a hybrid frame is generated, using the image frame, the first spatial registration, and the updated object model. … in the hybrid frame, some depictions of objects in the image frame may be replaced by depictions of the corresponding updated object models. … the depiction of the first object (which may be of particular relevance), may be replaced by the depiction of the corresponding updated object model. In contrast, depictions of other objects (which may or may not be identifiable) in the image frame may not be replaced. … the hybrid frame includes the image frame, with the depiction of the first object replaced by the depiction of the updated object model and/or depictions of other objects replaced by depictions of corresponding object models [‘object(s) is/are replaced with the [3-D] model’].
The hybrid frame may form a digital replica of the operating environment including the components in the operating environment, and in which the 3D representation (e.g., 3D point cloud [‘referring to the [3-D] scanned data’] …) of the first object and/or other object(s) are replaced by depictions of corresponding models that are updated to reflect the current state of the first object and/or other object(s). In the hybrid frame, the depiction of the first object, replaced by the depiction of the updated object model, may serve as a spatial reference. Other elements, e.g., the object model(s), [are] referenced relative to the updated object model.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 1 of CIER-MOORE to include the generating an image which is stereoscopically visible, which is obtained by referring to the [3-D] scanned data, and in which the … object(s) is/are replaced with the [3-D] model of MURUGAPPAN. The motivation for this modification is to implement a system for facilitating remote presentation of a physical world that includes a first object and an operating environment of the first object. The facilitation system includes a processing system obtaining an image frame depicting the physical world, identifying a depiction of the first object in the image frame, and obtaining a first spatial registration registering an object model with the first object in the physical world. The object model is of the first object. The processing system further obtains an updated object model corresponding to the object model updated with a current state of the first object, and generates a hybrid frame using the image frame, the first spatial registration, and the updated object model. The hybrid frame includes the image frame with the depiction of the first object replaced by a depiction of the updated object model (MURUGAPPAN, Abstract).

Regarding claim 8, CIER-MOORE-MURUGAPPAN disclose the information processing device according to claim 6, wherein: a process, included in the generating process, of replacing the … object(s) with the [3-D] model includes a process of generating a texture image on a basis of …, and a process of pasting, to a surface of the [3-D] model, the texture image thus generated (CIER; Col. 23, Lines 62-67 ~ Col. 24, Lines 1-11; “FIG. 2-O continues the examples of FIGS. 2A-2N, and illustrates additional information 265o that [is] generated from the automated analysis techniques disclosed herein and displayed (e.g., in a GUI similar to that of FIG. 2N), which … is a 2.5D or 3D model floor plan of the house. Such a model 265o may be additional mapping-related information that is generated based on the floor plan 230m and/or 230n, with additional information about height shown … to illustrate visual locations in walls of features such as windows and doors. While not illustrated in FIG. 2-O, additional information may be added to the displayed walls … such as from images taken during the video capture (e.g., to render and illustrate actual paint, wallpaper or other surfaces from the house on the rendered model 265) [‘generating a texture image on a basis of … an image obtained independently of the [3-D] scanned data’], and/or may otherwise be used to add specified colors, textures or other visual information to walls and/or other surfaces [‘pasting, to a surface of the [3-D] model, the texture image thus generated’].”).
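Editor's note: the texture-pasting step mapped to CIER in the claim 8 discussion above can be pictured as ordinary UV sampling. Below is a minimal, self-contained sketch; nothing in it comes from CIER or any other cited reference, and the texture, UV coordinates, and helper function are invented for illustration.

```python
# Hypothetical illustration of "pasting a texture image to a surface of the
# 3-D model" as nearest-texel UV lookup. All names and data are invented.

Texture = list[list[tuple[int, int, int]]]  # rows of RGB pixels

def sample_texture(texture: Texture, u: float, v: float) -> tuple[int, int, int]:
    """Map UV coordinates in [0, 1] to the nearest texel of the texture image."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A tiny 2x2 "texture image", generated, say, from a photo of the wall being modeled
texture: Texture = [[(200, 180, 150), (190, 170, 140)],
                    [(180, 160, 130), (170, 150, 120)]]

# Color applied at a model-surface point whose UV coordinates are (0.25, 0.75)
print(sample_texture(texture, 0.25, 0.75))  # -> (180, 160, 130)
```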
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over CIER in view of MOORE and MURUGAPPAN as applied to claim 6 above, and further in view of MOTT.

Regarding claim 7, CIER-MOORE-MURUGAPPAN disclose the information processing device according to claim 6; however, CIER-MOORE-MURUGAPPAN do not explicitly disclose that a process, included in the generating process, of replacing the … object(s) with the [3-D] model: includes a process of referring to the [3-D] scanned data and changing at least either of a size and a shape of the [3-D] model, and a process of … at least either of the size and the shape of which has been changed, which MOTT discloses (MOTT; FIG. 9; Col. 13, Lines 13-35; “In addition to control 940, … [UI] 920 also includes a second control 990. … both controls … modify the size of container 970. … a user … enters a dimension of a container 970 using the character fields in control 940. … the arrows in control 940 … adjust the dimensions of a container 970. … Control 990 … [is] manipulated (e.g., by dragging … handle(s)) … to modify the size, dimensions, location, and/or orientation of container 970. … a user may modify the size (e.g., dimensions) of a container 970 simply by manipulating the container 720 itself, such as by using a touch screen to drag a corner of a container 970. … The objects 960, 962 … change based upon the size of the container 970. … if a user makes container 970 smaller/larger by manipulating control 940/990, the objects 960, 962 in section 950 (e.g., couches that fit inside the container 970) may change based on a smaller or larger container 970.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 6 of CIER-MOORE-MURUGAPPAN to include the various teachings of MOTT. The motivation for this modification is to provide systems and methods for displaying 3D containers in a computer-generated environment. A computing device may provide a user with a catalog of objects which may be purchased. To view what an object may look like prior to purchasing the object, a computing device may show a 3D container that has the same dimensions as the object. The 3D container may be located and oriented based on a 2-D marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object (MOTT, see Abstract). More specifically, MOTT teaches a system for providing bounding boxes/shapes for containing/enveloping consumer products to be virtually overlaid onto a captured image of a user’s physical environment. This system allows for a user to rapidly assess whether the consumer product may fit into a user’s environment without necessitating a full render of the consumer object, which may contain complex textures, geometries, colorations, etc.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over CIER in view of MOORE and MURUGAPPAN as applied to claim 6 above, and further in view of Ohba (U.S. PG-PUB 2004/0066384, 'OHBA').
Regarding claim 9, CIER-MOORE-MURUGAPPAN disclose the information processing device according to claim 6; however, CIER-MOORE-MURUGAPPAN do not explicitly disclose that in the generating process, the … processor(s) generate, as the image, an image in which, among the … object(s), an object existing at a position separated relatively far away from a reference point is included as a stereoscopically invisible object, which OHBA discloses (OHBA; FIG. 30; ¶ 0148-149; “A Z-culling condition to be determined by the consolidation unit 48 is that the minimum Z-values of the back objects 280 and 282 shall be higher than Z-value of the Z-buffer 290 [which] is the maximum Z-value of the front object in the corresponding block. … when the above culling condition is satisfied, the back objects 280 and 282 are hidden behind the front object [‘objects existing at a position separated relatively far away from a reference point’] and invisible [‘included as a stereoscopically invisible object’]. … the minimum Z-value of the first back object 280 is higher than Z-value of the first block 270 of the Z-buffer 290, which satisfies Z-culling condition, therefore the first back object 280 is hidden behind the front object within the first block 270 and invisible. … the second back object 282 satisfies Z-culling condition and is hidden behind the front object within the second block 272 and invisible. On the other hand, since Z-value of the third block 276 takes the maximum value Zmax, even if there is a back object within this block, the Z-culling condition described above is not satisfied. This means that since there is no front object within the third block 276, the back object within this block can be seen from the viewpoint. The consolidation unit 48 does culling of the first back object 280 and second back object 282, which satisfy Z-culling condition, to omit rendering processing of these objects.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the information processing device according to claim 6 of CIER-MOORE-MURUGAPPAN to include the generating, as the image, an image in which, among the … object(s), an object existing at a position separated relatively far away from a reference point is included as a stereoscopically invisible object of OHBA. The motivation for this modification is to compare magnitudes of Z-values by block unit, such that the case not requiring rendering of the back object can be efficiently judged, so unnecessary rendering can be omitted and the speed of rendering processing can be increased (OHBA; ¶ 0149).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday 10A-4P MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN M COFINO/
Examiner, Art Unit 2614
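The limitation at the heart of the new ground of rejection is the search among 3-D model candidates, which the examiner reads onto MOORE's highest-score selection (¶ 0056). Below is a toy sketch of that selection rule; it is not code from any cited reference, and all names, types, and scores are invented for illustration.

```python
# Toy illustration of the disputed "searching among 3-D model candidates"
# step as the examiner maps it to MOORE (¶ 0056): score each candidate model
# against the detected object and keep the best fit. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    score: float  # similarity to the scanned object, e.g. recognition accuracy

def search_model(candidates: list[CandidateModel]) -> CandidateModel:
    """Return the candidate with the highest similarity score, mirroring
    MOORE's 'highest associated score' selection."""
    return max(candidates, key=lambda c: c.score)

# MOORE's bottle example: the body of a bottle resembles a cylinder or a box
candidates = [CandidateModel("cylinder", 0.91), CandidateModel("box", 0.74)]
print(search_model(candidates).name)  # -> "cylinder"
```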

Prosecution Timeline

Apr 07, 2023
Application Filed
Sep 03, 2025
Non-Final Rejection — §103
Dec 15, 2025
Response Filed
Feb 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597201: INTERACTIVE METHOD AND SYSTEM FOR DISPLAYING MEASUREMENTS OF OBJECTS AND SURFACES USING CO-REGISTERED IMAGES AND 3D POINTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597202: GEOLOGICALLY MEANINGFUL SUBSURFACE MODEL GENERATION BASED ON A TEXT DESCRIPTION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12536207: METHOD AND APPARATUS FOR RETRIEVING THREE-DIMENSIONAL (3D) MAP (granted Jan 27, 2026; 2y 5m to grant)
Patent 12511829: MAP GENERATION APPARATUS, MAP GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM (granted Dec 30, 2025; 2y 5m to grant)
Patent 12505605: SOLVING LOW EFFICIENCY OF MOVING ADJUSTMENT CAUSED BY CONTROLLING MOVEMENT OF IMAGE USING MODEL PARAMETERS (granted Dec 23, 2025; 2y 5m to grant)
Studying what changed in these cases can indicate how to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 94% (+32.2%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 210 resolved cases by this examiner. Grant probability derived from career allow rate.
