Prosecution Insights
Last updated: April 18, 2026
Application No. 17/910,237

METHOD AND APPARATUS FOR MODELLING A SCENE

Status: Final Rejection (§103)

Filed: Sep 08, 2022
Examiner: MAZUMDER, SAPTARSHI
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: InterDigital CE Patent Holdings
OA Round: 6 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 7-8
Time to Grant: 2y 8m
With Interview: 76%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (241 granted / 375 resolved; +2.3% vs TC avg)
Interview Lift: +11.8% (moderate), comparing allow rates for resolved cases with vs. without an interview
Typical Timeline: 2y 8m avg prosecution; 27 applications currently pending
Career History: 402 total applications across all art units
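
For readers who want to reproduce the headline figures, a minimal Python sketch. The variable names are ours, and the with/without-interview rates are placeholders (the per-bucket counts are not shown on this page):

```python
# Career allow rate: grants as a share of this examiner's resolved cases.
granted, resolved = 241, 375
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 64.3%, displayed as 64%

# Interview lift: gap between allow rates for resolved cases with and without
# an interview. The per-bucket counts are not shown on this page, so the two
# rates below are placeholders (assumption), chosen to match the +11.8% figure.
rate_with, rate_without = 0.758, 0.640
print(f"Interview lift: {rate_with - rate_without:+.1%}")   # +11.8%
```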

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center average is an estimate • Based on career data from 375 resolved cases
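
The deltas imply the Tech Center baseline the chart compares against. A minimal Python sketch that backs it out of the displayed figures (dictionary names are illustrative; treating each figure as the share of resolved cases drawing at least one rejection under that statute is our assumption, since the page does not define the rates):

```python
# Back out the Tech Center baseline implied by each examiner rate and its delta.
examiner_rates = {"101": 10.2, "103": 50.6, "102": 6.8, "112": 19.5}   # percent
deltas_vs_tc   = {"101": -29.8, "103": +10.6, "102": -33.2, "112": -20.5}

tc_baseline = {s: round(examiner_rates[s] - deltas_vs_tc[s], 1) for s in examiner_rates}
print(tc_baseline)   # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute backs out to the same 40.0% baseline, which suggests the "Tech Center average estimate" is a single flat figure rather than a per-statute average.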

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3, 7, 9, 14, 17-18, 23, 25 are rejected under 35 U.S.C. 103 as being unpatentable over Huo et al. (US Pat. Pub. No. 20210365681, “Huo”) in view of Bertolami et al. (US Patent No. 8797321, “Bertolami”) and Wang et al. (US Patent Publication 20210034319, “Wang”).

Regarding claim 17, Huo teaches an apparatus comprising a processor (Fig. 2, element 24) configured to: obtain a model of a real scene spatially locating at least one object in the real scene, wherein the model is based on first information describing the real scene, wherein the model includes a lighting model (“[0021]…. The IoT devices 40 are arranged within a scene 60. The scene 60 comprises a real-world environment, such as a room. [0042] The method 100 continues with a step of generating a SLAM map of the scene based on the plurality of image frames (block 120)…… the processor 24 is configured to execute the SLAM and IoT localization program 34 to generate a three-dimensional model or map representation of the scene 60, referred to herein as the SLAM map 36”. [0021] “….Some exemplary IoT devices 40 include, but are not limited to, light bulbs, lights switches,….. a network router”. Because a light bulb is an IoT object, and the IoT object is modeled in the scene, a lighting model is modeled in the scene. “[0022]…… The camera 22 is configured to generate image frames of the scene 60, each of which comprises a two-dimensional array of pixels. Each pixel has corresponding photometric information (intensity, color, and/or brightness)”).

Although Huo teaches the lighting model as shown above, Huo is silent about the lighting model comprising at least a first light direction and a first light intensity of a light source associated with the at least one object. Bertolami teaches a lighting model comprising at least a first light direction and a first light intensity of a light source associated with at least one object (Col. 1, lines 50-55: “Determining or estimating a light source includes measuring or estimating one or more of the following light characteristics: the location of the light source, the direction of the light, the color of the light, the shape of the light, the intensity of the light, and the coherence or diffusion properties of the light”). Huo and Bertolami are analogous art, as both are related to the processing of sensor data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huo to have a lighting model that comprises at least a first light direction and a first light intensity of a light source associated with the at least one object, as taught by Bertolami. The motivation for the above is to reconstruct lighting in the virtual environment.

Huo modified by Bertolami teaches receive an unsolicited message from the at least one object, wherein the unsolicited message comprises second information indicating a change in status of the at least one object that changes a lighting of the real scene around the at least one object (Huo receives the message from the IoT devices; the IoT devices send their status to the processor without any request from the processor, so the message Huo receives is an unsolicited message. “[0069] In at least some embodiments, the processor 24 of the AR device 20 is configured to operate the Wi-Fi module 30 to receive at least one status parameter from one or more of the IoT devices 40. As used herein, the phrase “status parameter” refers to any parameter relating to the status, control state, operating state, or other state of device. Exemplary status parameters may include a status of an IoT device 40 (e.g., “on,” “off,” “low battery,” “filter needs replaced,” etc.), sensor data measured by the respective sensor(s) 52 of an IoT device 40 (e.g., temperature, moisture, motion, occupancy, etc.), or a control status of the respective actuator(s) 54 (“cooling,” “heating,” “fan speed: high,” etc.)”. Bertolami, Col. 8, lines 30-32: “the augmented reality display would dynamically update the rendering of the virtual object based on the orientation of the light sensor and the light source”); wherein the second information further indicates at least one of a second light direction or a second light intensity of the light source associated with the at least one object (Bertolami, Col. 8, lines 28-32: “Where the light sensor and a light source move independently (e.g., a digital camera and a flashlight), the augmented reality display would dynamically update the rendering of the virtual object based on the orientation of the light sensor and the light source”); update the lighting model based on at least one of the second light direction or the second light intensity of the light source associated with the at least one object and update the model based on the updated lighting model (Huo [0069]: “….. The processor 24 is configured to render the graphical elements associated with the IoT device 40 from which the status parameters are received depending on the status parameter and, in particular, render the graphical elements to indicate the status parameter to the user. For example, an icon associated with a particular IoT device might be rendered green to indicate the status parameter “on””; Bertolami, Col. 8, lines 30-35: “the augmented reality display would dynamically update the rendering of the virtual object based on the orientation of the light sensor and the light source. With a sudden loss of all light in the physical environment, the virtual objects would also go dark, instead of remaining bright in the augmented reality display”).

Huo modified by Bertolami does not teach sending at least one parameter of the updated model from the processing device to a plurality of renderer devices for use in respective augmented reality applications. However, Wang teaches sending at least one parameter of the updated model from the processing device to a plurality of renderer devices for use in respective augmented reality applications (“[0006] The method further receives, on the first device, input providing a change to the 3D object and, responsive to the input, provides data corresponding to the change. Based on this data, the second view of the 3D object on the second device is updated to maintain consistency between the 3D object in the first view and the second view. For example, if a first user changes the color of a 3D model of a table to white on the first device, the first device sends data corresponding to this change to the second device, which updates the second view to also change the color of the 3D model depicted on the second device to white.” Paragraph [0045] indicates that multiple devices provide AR/VR/MR displays and that one parameter (for example, color) of an updated model is transferred to other devices for the AR/VR/MR application. “[0045] FIG. 4 illustrates the change made to the 3D model 125 displayed in the second view 215 on the second device of FIG. 2. Leg 305 of the depicted 3D model 125 is extended to correspond to the extension of the leg 305 of the 3D model 125 in first view 115. Any changes made to the depicted 3D model 125 in the first view 115 are depicted in the 3D model 125 in the second view 215. Conversely any changes made to the 3D model 125 in the second view 215 are depicted in the 3D model 125 in the first view 115. In this way, two or more devices such as devices 10, 20 are able to simultaneously view or edit the same 3D model 125 in the same or different settings/viewing modes (e.g., monoscopically, stereoscopically, in VR, in MR, etc.)”.). Huo modified by Bertolami and Wang are analogous art, as they are from the field of virtual image generation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huo modified by Bertolami to include sending at least one parameter of the updated model from the processing device to a plurality of renderer devices for use in respective augmented reality applications, as taught by Wang. The motivation for the above modification is to send minimum information (only the changed parameter), which requires a lower bandwidth.

Claim 1 is directed to a method, and its steps are similar in scope and function to those performed by apparatus claim 17; therefore, claim 1 is also rejected with the same rationale as specified in the rejection of claim 17. Claim 14 is directed to a non-transitory computer-readable storage medium (Huo [0039]: “……controller or processor executing programmed instructions (e.g., the SLAM and IoT localization program 34 and/or firmware of the microcontroller 44) stored in non-transitory computer readable storage media operatively connected to the controller or processor to manipulate data or to operate one or more components in the vehicle access system 10”), and its elements are similar in scope and function to those performed by apparatus claim 17; therefore, claim 14 is also rejected with the same rationale as specified in the rejection of claim 17.

Regarding claims 3 and 18, Huo modified by Bertolami and Wang teaches wherein the first information comprises at least one image of the real scene (Huo [0042]: “The method 100 continues with a step of generating a SLAM map of the scene based on the plurality of image frames (block 120)”).

Regarding claims 7 and 23, Huo modified by Bertolami and Wang teaches wherein the light source is associated with a set of parameters including at least one of a light type, a position, a shape, a dimension, a status, or a color (Huo “[0044]…Each of the points, lines, or other geometric shapes of the SLAM map 36 may be associated with photometric information (e.g., one or more intensity, color, and/or brightness values). [0069] In at least some embodiments, the processor 24 of the AR device 20 is configured to operate the Wi-Fi module 30 to receive at least one status parameter from one or more of the IoT devices 40”).

Regarding claims 9 and 25, Huo modified by Bertolami and Wang teaches wherein the light source represents a smart bulb (Huo [0021]: “….Some exemplary IoT devices 40 include, but are not limited to, light bulbs, lights switches, a programmable thermostat, a fan, a humidifier, a television, a printer, a watering can, speakers, environmental sensors, and/or a network router”).

Claim(s) 4-5, 16 and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Huo modified by Bertolami and Wang, and further in view of Bleyer et al. (US Pat. Pub. No. 20210027538, “Bleyer”).

Regarding claims 4 and 19, Huo modified by Bertolami and Wang is silent about wherein the first information comprises a model element representing the at least one object. Bleyer teaches first information comprising a model element representing at least one object (Bleyer “[0050] FIG. 2 also shows how IOT device 205 is able to collect any type or amount of sensor data 230. Sensor data 230 can include measurement data 230A (e.g., data collected from sensor(s) 215 including any type of sensed data such as environmental data describing/representing the environment or even operational data of a device), image data 230B (e.g., image data generated or captured by camera(s) 220), and IMU data 230C (e.g., movement data generated by IMU 225 describing (i.e. digitally representing) any movements of the IOT device within its environment). Any amount or type of collected or sensed data may be included in sensor data 230”. Here, the “movement data” is the claimed model element). Bleyer and Huo modified by Bertolami and Wang are analogous art, as both are related to the processing of sensor data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huo modified by Bertolami and Wang to have first information that comprises a model element representing at least one object, as taught by Bleyer. The motivation for the above is to know the trajectory of the IoT device so that this can be used during rendering.

Huo modified by Bertolami, Wang, and Bleyer teaches wherein the model element includes at least one of a pre-determined shape, texture, or reflectance parameter of the at least one object (Bleyer [0050], quoted above; here, the “movement data” is the claimed model element, and the movement data provides a shape of the object or IoT device). Additionally, Bertolami teaches a reflectance parameter of the at least one object (see Bertolami, Col. 1, lines 50-55: “Determining or estimating a light source includes measuring or estimating one or more of the following light characteristics: the location of the light source, the direction of the light, the color of the light, the shape of the light, the intensity of the light, and the coherence or diffusion properties of the light”).

Regarding claims 5 and 20, Huo modified by Bertolami, Wang, and Bleyer teaches wherein the model element is received from the at least one object (Bleyer “[0051] Architecture 200 shows how IOT device 205 is transmitting and/or receiving data 235 across a network to a cloud 240, and in particular to a server 240A operating or executing a mixed-reality (MR) service 240B”).

Regarding claims 16 and 21, Huo modified by Bertolami, Wang, and Bleyer teaches wherein the model element is received from a network element storing a database of model elements (Bleyer [0054]: “FIG. 2 shows how MR service 240B is able to store the data 235 in storage 25. [0055] In some cases, MR service 240B is able to transmit some or all of the received data 235 to another device, as shown by data 260”).

Claim(s) 8 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Huo modified by Bertolami and Wang, and further in view of Barnett et al. (US Patent No. 10110272, “Barnett”).

Regarding claims 8 and 24, Huo modified by Bertolami and Wang is silent about wherein the light source represents a window, and wherein at least one of the first information or the second information comprises at least one of a shutter status, a day of year, a time of day, or weather conditions. Barnett teaches the light source represents a window, and wherein at least one of the first information or the second information comprises at least one of a shutter status, a day of year, a time of day, or weather conditions (Barnett, Col. 31, lines 11-30: “In some instances, the one or more IoT-capable sensors 125 might include, without limitation, at least one of an ambient temperature sensor, a flame detector…… a weather sensor, or a seismic sensor, and/or the like.” Col. 23, lines 7-35: “In some embodiments, the user devices 315, some of which might include one or more IoT-capable sensors 310, might include, without limitation, one or more display devices 315a…… one or more automated window locking systems 315i, one or more automated window opening or closing systems 315j, one or more smart windows 315k”). Barnett and Huo modified by Bertolami and Wang are analogous art, as both are related to the processing of sensor data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huo modified by Bertolami and Wang such that the light source represents a window, and at least one of the first information or the second information comprises at least one of a shutter status, a day of year, a time of day, or weather conditions, as taught by Barnett. The motivation for the above is to collect useful information from a smart object for recreating a scene.

Claim(s) 10, 12 and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Huo modified by Bertolami and Wang, and further in view of LaMontagne et al. (US Patent Pub. No. 20180047067, “LaMontagne”).

Regarding claims 10 and 26, Huo modified by Bertolami and Wang is silent about obtaining a request for model information at a position in the real scene. LaMontagne teaches obtaining a request for model information at a position in the scene (LaMontagne “[0022] As illustrated, publisher or developer can, using publisher interface 102, request the smart object placement system 108 to place smart object 101 in a three dimensional environment, as described further herein. smart object placement system 108 can request the publisher's input to select a smart object (generated by smart object generation system 106) and save the coordinates, including game identification data, smart object identification data, smart object category/type data, publisher/developer identification data, and location/scene identification data. [0034]…… At 504, the system can receive a request from a user (e.g., when the user selects tab 302) to display BRDI associated with at least one smart object out of the set of smart objects of the 3D environment.”). Huo modified by Bertolami and Wang and LaMontagne are analogous art, as both are related to the processing of sensor data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Huo modified by Bertolami and Wang by obtaining a request for model information at a position in the scene, similar to obtaining a request for model information at a position in the real scene, as taught by LaMontagne. The motivation for the above is to provide user control by providing an option to choose a specific object in a scene for receiving model information.

Regarding claims 12 and 27, Huo modified by Bertolami, Wang, and LaMontagne teaches wherein the request is obtained from at least one of a rendering device or a user interface of the processing device (LaMontagne “[0022] As illustrated, publisher or developer can, using publisher interface 102, request the smart object placement system 108 to place smart object 101 in a three dimensional environment, as described further herein”).

Response to Arguments

Applicant's arguments, see remarks filed 01/05/2026, with respect to the rejection of independent claims 1, 14, and 17 under 35 USC 103 have been fully considered and are persuasive. The rejection has been withdrawn. However, upon further consideration, a new ground of rejection has been made under 35 U.S.C. 103 as being unpatentable over Huo et al. (US Pat. Pub. No. 20210365681, “Huo”) in view of Bertolami et al. (US Patent No. 8797321, “Bertolami”) and Wang et al. (US Patent Publication 20210034319, “Wang”).

Applicant argues, see remarks, page 7: “All rejections are moot by virtue of amendment to independent claims 1, 14, and 17. As the Office reconsiders the patentability of the claims in light of the amendments, to facilitate prosecution, Applicant wishes to point out that the system architecture of Huo is AR device-centric - the AR device (e.g., 20) is actively surveying its environment to generate a map and localize objects. One of ordinary skill in the art recognizes this as a “pull” or request-response model where the AR device polls objects for their status as needed. So, assuming for the sake of argument, even if Huo discloses receiving a status from an IoT device, the AR device solicited the status.”

Examiner replies: Huo's processor receives messages from the IoT devices. Because the IoT devices send their status to the processor without any request from the processor, the message Huo receives is an unsolicited message. See Huo, “[0069] In at least some embodiments, the processor 24 of the AR device 20 is configured to operate the Wi-Fi module 30 to receive at least one status parameter from one or more of the IoT devices 40. As used herein, the phrase “status parameter” refers to any parameter relating to the status, control state, operating state, or other state of device. Exemplary status parameters may include a status of an IoT device 40 (e.g., “on,” “off,” “low battery,” “filter needs replaced,” etc.), sensor data measured by the respective sensor(s) 52 of an IoT device 40 (e.g., temperature, moisture, motion, occupancy, etc.), or a control status of the respective actuator(s) 54 (“cooling,” “heating,” “fan speed: high,” etc.)”.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER, whose telephone number is (571) 270-3454. The examiner can normally be reached 8 am-4 pm PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAPTARSHI MAZUMDER/
Primary Examiner, Art Unit 2612
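
To make the dispute concrete: the independent claims turn on a push-style data flow in which a smart object volunteers a lighting change and the processor fans the updated parameter out to several renderer devices. A minimal sketch of that flow follows; all class and method names are hypothetical illustrations, not code from the application or any cited reference.

```python
from dataclasses import dataclass

@dataclass
class LightSource:
    direction: tuple[float, float, float]  # e.g., the claimed first light direction
    intensity: float                       # e.g., the claimed first light intensity

class SceneModelHost:
    """Toy stand-in for the claimed processing device holding the scene/lighting model."""

    def __init__(self, lights: dict[str, LightSource], renderers: list) -> None:
        self.lights = lights          # lighting model keyed by object id
        self.renderers = renderers    # the claimed plurality of renderer devices

    def on_unsolicited_message(self, object_id: str, message: dict) -> None:
        # The object pushes this message on its own; the host never polled for it.
        light = self.lights[object_id]
        light.direction = message.get("direction", light.direction)  # second light direction
        light.intensity = message.get("intensity", light.intensity)  # second light intensity
        # Forward only the changed parameter(s) to every renderer device.
        for renderer in self.renderers:
            renderer.update_lighting(object_id, message)
```

On this sketch, the parties' dispute reduces to whether Huo's status parameters arrive without a poll (the examiner's reading) or in response to one (the applicant's pull-model argument).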

Prosecution Timeline

Sep 08, 2022: Application Filed
Mar 08, 2024: Non-Final Rejection — §103
Jun 12, 2024: Response Filed
Sep 16, 2024: Final Rejection — §103
Dec 16, 2024: Request for Continued Examination
Dec 17, 2024: Response after Non-Final Action
Jan 07, 2025: Non-Final Rejection — §103
Apr 02, 2025: Interview Requested
Apr 11, 2025: Applicant Interview (Telephonic)
Apr 11, 2025: Examiner Interview Summary
Apr 11, 2025: Response Filed
May 16, 2025: Final Rejection — §103
Aug 19, 2025: Request for Continued Examination
Aug 28, 2025: Response after Non-Final Action
Oct 08, 2025: Non-Final Rejection — §103
Jan 05, 2026: Response Filed
Apr 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597211: GENERATING VARIANTS OF VIRTUAL OBJECTS BASED ON ADJUSTABLE EXTERNAL FACTORS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586316: METHOD FOR MIRRORING 3D OBJECTS TO LIGHT FIELD DISPLAYS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582488: USER INTERFACE FOR CONNECTING MODEL STRUCTURES AND ASSOCIATED SYSTEMS AND METHODS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579745: Curvature-Guided Inter-Patch 3D Inpainting for Dynamic Mesh Coding (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567210: Multipath Artifact Avoidance in Mobile Dimensioning (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 64%
With Interview: 76% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
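
The interview-adjusted figure appears to combine the two numbers above additively, in percentage points. A minimal Python sketch under that assumption:

```python
# Assumption: the interview lift is additive in percentage points.
base_probability = 0.64    # grant probability = career allow rate, 241 / 375
interview_lift = 0.118     # reported interview lift

with_interview = base_probability + interview_lift
print(f"With interview: {with_interview:.0%}")   # 76%, matching the tile above
```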

Free tier: 3 strategy analyses per month