Prosecution Insights
Last updated: April 19, 2026
Application No. 18/845,016

METHOD AND APPARATUS FOR MONITORING AN INDUSTRIAL PLANT

Status: Final Rejection (§103)
Filed: Sep 09, 2024
Examiner: SMITH, STEPHEN R
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: BASF Corporation
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 71%, above average (306 granted / 433 resolved; +12.7% vs TC avg)
Interview Lift: +11.2% (moderate), comparing allow rates for resolved cases with vs. without an interview
Avg Prosecution: 2y 7m typical timeline; 13 applications currently pending
Career History: 446 total applications across all art units
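
These headline figures reduce to simple ratios over the examiner's resolved cases. Below is a minimal sketch of that arithmetic, assuming the interview lift is the difference in allow rate between resolved cases with and without an interview; only 306/433 and the displayed percentages come from the panel, and the with-interview split is a hypothetical placeholder chosen to roughly reproduce the stated +11.2%:

```python
# Sketch of the examiner-stats arithmetic shown above. Not the vendor's
# actual pipeline; inputs marked "hypothetical" are placeholders.
granted, resolved = 306, 433
allow_rate = granted / resolved                # 0.7067 -> displayed as 71%

tc_avg = 0.58                                  # back-solved from "+12.7% vs TC avg"
print(f"career allow rate: {allow_rate:.1%} ({allow_rate - tc_avg:+.1%} vs TC avg)")

# Hypothetical split of resolved cases by whether an interview was held.
iv_granted, iv_resolved = 102, 130             # placeholder values
no_iv_rate = (granted - iv_granted) / (resolved - iv_resolved)
lift = iv_granted / iv_resolved - no_iv_rate   # ~ +0.111, displayed as +11.2%
print(f"interview lift: {lift:+.1%}")
```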

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 433 resolved cases.
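
One detail worth noting: all four "vs TC avg" deltas back-solve to the same 40.0% baseline, consistent with the panel treating the Tech Center average as a single estimate rather than a per-statute figure. A minimal check of that, using only the numbers shown above:

```python
# Back-solving the Tech Center baseline from the panel's figures.
# examiner_share and delta_vs_tc are exactly the numbers displayed above.
examiner_share = {"101": 0.044, "103": 0.579, "102": 0.237, "112": 0.048}
delta_vs_tc    = {"101": -0.356, "103": +0.179, "102": -0.163, "112": -0.352}

for statute in examiner_share:
    baseline = examiner_share[statute] - delta_vs_tc[statute]
    # every statute yields baseline 40.0%, i.e. one flat TC estimate
    print(f"§{statute}: {examiner_share[statute]:.1%} vs baseline {baseline:.1%}")
```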

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to amended claim 1 have been fully considered but are moot because the arguments do not directly apply to the new combination of references being used in the current rejection. The amendment changes the scope of the claim because the addition of the limitation of determining dynamic recording properties to claim 1 affects the step of "outputting (S4) the video stream (VS') together with the determined object properties (XYZ) and recording properties." The new grounds of rejection presented in this Office action are therefore necessitated by the amendment, and this action is appropriately made final.

The rejection of claim 20 under 35 U.S.C. 101 is withdrawn in view of the amendment.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-8, 11-16 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over US 20160127712 A1 to Alfredsson et al. (hereinafter "Alfredsson") (cited in the IDS filed 9/9/2024) in view of US 11636610 B2 to Lasenby et al. ("Lasenby").

Consider claim 1, Alfredsson discloses a method for monitoring an industrial plant comprising plant objects (Par. [0001]: "transmitting video to a remote user from an industrial site"; Par. [0071]: object identifiers), the method comprising the following steps: receiving (S1) a video stream (VS) of the plant generated with the aid of a video camera device, wherein the video stream (VS) comprises a temporal sequence of two-dimensional image data (Par. [0016]-[0019]: "obtaining a three-dimensional model of the location, capturing video images of the location using a camera, [. . .] and transferring the three-dimensional model and orientation to the device of the remote user together with a video stream comprising the captured video images"); assigning (S2) two-dimensional image components (PT) of the video stream (VS) to plant objects (A′, B′, C′, D′) with the aid of a three-dimensional (3D) model (CAD) of the plant (Par. [0070]-[0071]: "the control unit 38 analyses the captured images and investigates if it recognizes them with regard to a pre-existing three-dimensional model of the location and objects at this location [. . .] the real world objects, may be provided with object identifiers, such as NFC tags or bar codes. If these are read it is possible to obtain information about what types of objects they are. The type may be identified through the camera 34 detecting a visual object identifier, like a bar code"), in which respective object properties (XYZ) are allocated to the plant objects (A′, B′, C′, D′); determining (S3) a first recording property of the image data depending on the image components (PT) assigned to one or more plant objects (C′) and the respective object property (XYZ), wherein the recording property comprises a position, an orientation (Par. [0077]: "the camera data may comprise the camera orientation. The camera data may also comprise the position of the camera"); and outputting (S4) the video stream (VS′) together with the determined object properties (XYZ) and recording properties (Par. [0026]-[0027] and [0077]: as discussed above).

Alfredsson fails to explicitly disclose wherein the recording property comprises a zoom factor of the video camera device which captures the video stream; or determining a dynamic recording property based on a temporal sequence of two-dimensional image components of the video stream (VS); and outputting (S4) the video stream (VS′) together with the determined object properties (XYZ) and recording properties.

In analogous art, Lasenby discloses wherein the recording property comprises a zoom factor of the video camera device which captures the video stream; and determining a dynamic recording property based on a temporal sequence of two-dimensional image components of the video stream (VS) (Col. 9 Ln. 51 - Col. 10 Ln. 10 and Fig. 7: "FIG. 7 is a flow chart of example camera focus parameter estimation method 700 … In block 725, the estimator 180 estimates values for focal lengths (expressed in x and y directions) using the pair of frames indicative of some camera rotation in the x-y plane … In block 730, the estimator 180 determines if some zooming has occurred for the frames that may be used for three-dimensional reconstruction").

The combination of Alfredsson-Lasenby discloses: outputting (S4) the video stream (VS′) together with the determined object properties (XYZ) and recording properties (Alfredsson: Par. [0027] and [0077]: "transfer the three-dimensional model and orientation to the device of the remote user together with a video stream from the camera comprising the captured video images"; also Lasenby: Col. 10 Ln. 50-55: "The by-product information of relative camera rotations and reconstructed 3D world points may be used then, in block 830, as an input to an event based browser system").

It would have been obvious to one with ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of Alfredsson in view of the above teachings of Lasenby because apparent camera position may be affected by camera zoom; that is, the camera parameters may make a camera located away from a real world object appear much closer than the camera actually is (Lasenby: Col. 7 Ln. 21-29).

Consider claim 2, Alfredsson discloses wherein the industrial plant is arranged on a plant site and the position of the video camera device on the plant site is ascertained (Par. [0015]-[0018]: "the method being performed by a video communication device being placed at a location of the industrial site [. . .] comprising: obtaining a three-dimensional model of the location, capturing video images of the location using a camera, determining a current orientation of the camera"; Par. [0088]: "Different frames of the video are processed to form a map of the environment. In the forming of the map also the position of the video communication device is determined. Using this map and a current video image, the current orientation or pose of the camera is calculated").

Consider claim 3, Alfredsson discloses wherein the video camera device has a video capture region which is smaller than the plant site (Par. [0098] and Fig. 10-11: projecting area PA).

Consider claim 4, Alfredsson discloses wherein the video camera device is mounted at an unknown position during the step of receiving the video stream (VS) (Par. [0088]: "Different frames of the video are processed to form a map of the environment. In the forming of the map also the position of the video communication device is determined. Using this map and a current video image, the current orientation or pose of the camera is calculated").

Consider claim 5, Alfredsson discloses wherein the object properties (XYZ) comprise a location (Par. [0070]: "three-dimensional model of the location and objects at this location"), a size and/or dimension (Par. [0070]: "3DM of the location and the various objects"; note, size/dimension is therefore implicit), an alignment and/or orientation (Par. [0070]: "3DM of the location and the various objects"; also Par. [0072]: "the position in relation to the layout and various objects at the location"; note, alignment/orientation is therefore implicit), a list of constituents (Alfredsson: Par. [0101]: object associated constituents/properties, e.g., "temperature in a tank or the voltage of a transformer"), and material properties and/or surface properties of the plant object (A′, B′, C′, D′) (Alfredsson: Par. [0071]: "The type may be identified through the camera 34 detecting a visual object identifier, like a bar code").

Consider claim 6, Alfredsson discloses the method further comprising: creating (S11) a database which assigns further functional properties and/or context data to the plant objects (A′, B′, C′, D′) (Par. [0050]: "database 20 where data relating to control and protection of the process is stored [. . .] It may also provide face plates of process control objects, which face places may comprise process control data from the database 20 regarding the process control object"; Par. [0071]: "Such a code may be used to fetch data associated with the object for instance from a database in the process control system"; Par. [0100]-[0102]: "He may also select an object in the model and the selection may be transferred to the control unit 38. The control unit 38 may then fetch data about the object from a database 20. It may for instance fetch a face plate with current data of the process control object.").

Consider claim 7, Alfredsson discloses wherein the functional properties and/or context data are updated (Par. [0100]-[0102]: "The remote user 52 may now want to obtain some more data about the process control objects that he sees in the video stream VS. He may for instance desire to obtain data of the temperature in a tank or the voltage of a transformer. In order to do this he may select an object in the video, or in the previously obtained model of the location. He may for instance detect an object identifier in the video and send the object identifier to the video communication device. He may also select an object in the model and the selection may be transferred to the control unit 38. The control unit 38 may then fetch data about the object from a database 20."; note, it is therefore implicit that the object-associated properties, e.g., temperature/voltage, are updated).

Consider claim 8, Alfredsson discloses wherein the assigning (S2) further takes place depending on functional properties of the plant objects (A′, B′, C′, D′) (Par. [0071]: "The process control objects, i.e. the real world objects, may be provided with object identifiers, such as NFC tags or bar codes. If these are read it is possible to obtain information about what types of objects they are. The type may be identified through the camera 34 detecting a visual object identifier, like a bar code [. . .] Such a code may be used to fetch data associated with the object for instance from a database in the process control system").

Consider claim 11, Alfredsson discloses wherein the camera movement and/or zoom speed are/is ascertained in real time (Par. [0076]-[0077]: "The control unit 38 also continuously determines the camera orientation, step 68, for instance based on the line of sight of a viewfinder of the camera 34. Thereby the control unit 38 determines a current orientation of the video communication device 32 … the camera data may comprise the camera orientation. The camera data may also comprise the position of the camera").

Consider claim 12, modified Alfredsson discloses the method according to claim 1, wherein the video stream (VS) does not comprise information data about the first recording property (Alfredsson: Par. [0077]: "The three-dimensional model 3DM may more particularly be transmitted together with camera data CD in or beside the video stream VS"), and the first recording property is ascertained exclusively by way of a comparison of the assigned two-dimensional image components (PT) and the data of the 3D model (CAD) (Par. [0070]: "After the area has been scanned, the control unit 38 analyses the captured images and investigates if it recognises them with regard to a pre-existing three-dimensional model of the location and objects at this location [. . .] These algorithms can be used to locate the features of a current frame or video image in the map of the real world and based on this the orientation of the camera 34 may be determined."; Par. [0072]: "determining the position of the video communication device 32 at the location, step 63, i.e. the position in relation to the layout and various objects at the location. This may also involve adding the video communication device to the three-dimensional model 3DM of the location").

Consider claim 13, Alfredsson discloses wherein the 3D model data comprise a position of the video camera device (Par. [0072]: "determining the position of the video communication device 32 at the location, step 63, i.e. the position in relation to the layout and various objects at the location. This may also involve adding the video communication device to the three-dimensional model 3DM of the location.").

Consider claim 14, modified Alfredsson discloses the method according to claim 1, wherein a zoom factor is ascertained depending on a plurality of temporally successive two-dimensional image data of the video camera device (Lasenby: Col. 9 Ln. 51 - Col. 10 Ln. 10 and Fig. 7: "FIG. 7 is a flow chart of example camera focus parameter estimation method 700 … In block 730, the estimator 180 determines if some zooming has occurred for the frames that may be used for three-dimensional reconstruction"). The motivation to combine the references is the same as regarding claim 1.

Consider claim 15, Alfredsson discloses: representing (S5) the video stream (VS) together with the object properties (XYZ) and/or recording properties with the aid of a display device (Par. [0057]: "video images are then presented for the remote user 52 via the display of his or her computer 51. This is shown in FIG. 6. FIG. 5 schematically indicates the transmission of a video stream VS, a three-dimensional model 3DM and camera data CD from the video communication device 32 to the computer 51"; Par. [0079]: "The view contains the live video stream from the camera, of which the image VI is presented. Furthermore, contextual information is provided through an overview image OI of the location, which overview image OI is obtained through visualizing the model 3DM of the location with the video communication device VCD and its orientation"; also Par. [0100]-[0102]).

Consider claim 16, Alfredsson discloses: creating (S12) the three-dimensional (3D) model of the plant, wherein object properties (XYZ) are allocated to the plant objects (Par. [0070]: "if there was no pre-existing model, step 58, a new three-dimensional model 3DM of the location and the various objects in it is created by the control unit 38"; Par. [0071]: identifying objects and fetching data/properties associated with the object, for instance from a database; also Par. [0050]: object data server 21 for storing associated object data/properties).

Consider claim 20, the computer program product is rejected based on the same rationale as the method of claim 1 and because Alfredsson further teaches machine-readable instructions which, when they are processed by one or more processor devices of a processing environment (CLD) (Par. [0053]: "In the program memory 39 there is provided software code which when being run by the processor 40 forms a control unit 38"), cause one or all method steps of the method according to claim 1 to be carried out. The motivation to combine the references is the same as regarding claim 1.

Consider claim 21, the apparatus is rejected based on the same rationale as the method of claim 1 and because Alfredsson further teaches a system including a camera device (Par. [0052] and Fig. 2: camera 34), a storage device (Par. [0070]: "The pre-existing three-dimensional model may be provided in the video communication device 32. As an alternative it may be obtained or fetched from a server, such as server 21"), a processing device (Par. [0070]: control unit 38), and a display device (Par. [0057]: "display of his or her computer 51"), wherein the video camera device, the storage device, the processing device and the display device are communicatively coupled to one another (Par. [0049] and Fig. 1: "gateway 16 connected to this first data bus B1, which gateway 16 is connected to at least one wireless network WN"; Par. [0056] and Fig. 1: "a computer 51 of the remote user 52 may communicate with the video communication device 32 via the Internet."). The motivation to combine the references is the same as regarding claim 1.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Alfredsson in view of Lasenby, further in view of US 11836420 B1 to Levanti et al. ("Levanti").

Consider claim 9, modified Alfredsson discloses the method and three-dimensional model according to claim 1, but fails to explicitly disclose wherein the three-dimensional model is a computer-aided design model of the industrial plant. In analogous art, Levanti discloses this feature (Abstract: "A 3D modeling service of a provider network may receive, from a client facility, different video streams from different cameras at the facility [. . .] The 3D modeling service may construct a 3D model of the facility based at least on the collection of image data and other data from one or more other sources (e.g., CAD files other floor plans)"). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the combined teachings of Alfredsson and Lasenby further in view of the above teachings of Levanti to fill in one or more coverage gaps of a 3D model using measurements from a CAD file (Levanti: Col. 5 Ln. 14-29).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Alfredsson in view of Lasenby, further in view of US 20160350921 A1 to Bataller et al. ("Bataller") (cited in the IDS filed 9/9/2024).

Consider claim 17, modified Alfredsson discloses the method according to claim 1, and transmitting the video stream (VS) to a processing platform; transmitting the 3D model (CAD) to the processing platform; and transmitting the determined first recording property from the processing platform to a display device (Alfredsson: Par. [0057]: "video images are then presented for the remote user 52 via the display of his or her computer 51. This is shown in FIG. 6. FIG. 5 schematically indicates the transmission of a video stream VS, a three-dimensional model 3DM and camera data CD from the video communication device 32 to the computer 51 of the remote user 52"). Modified Alfredsson fails to disclose: wherein the processing platform is designed like a cloud service (CLD) and is configured to carry out the steps of assigning (S2) two-dimensional image components and determining (S3) the respective recording property in a computer-implemented manner in a secure processing environment (CLD). In analogous art, Bataller discloses wherein the processing platform is designed like a cloud service (CLD) (Par. [0051]: "video analytics platform 310 may include [. . .] virtual machines (VMs) provided in a cloud computing environment") and is configured to carry out the steps of assigning (S2) two-dimensional image components and determining (S3) the respective recording property in a computer-implemented manner in a secure processing environment (CLD) (Par. [0044]-[0045]: "the video analytics platform 210 may access one or more external databases 260 to determine characteristics of the one or more objects identified by the video analyzer component"; Par. [0071]: "FIG. 5 is a flowchart of an example process 500 for determining intrinsic and extrinsic video camera parameters."; Par. [0077]-[0079]: "At step 508, the system calculates a tilt and focal length [. . .] At step 510, the system calculates vertical and horizontal angles of view"). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the combined teachings of Alfredsson and Lasenby further in view of the above teachings of Bataller to ascertain recording properties based on image analysis to possibly improve flexibility and/or accuracy of calibration (Bataller: Par. [0024]), and to implement the video analytics platform as a cloud service for various benefits such as increased flexibility and scalability.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Alfredsson in view of Lasenby, further in view of US 20100315416 A1 to Pretlove et al. ("Pretlove").

Consider claim 18, modified Alfredsson discloses the method according to claim 1, and Alfredsson discloses the steps of: receiving a video stream (VS), assigning (S2) two-dimensional image components (PT), determining (S3) a respective recording property and outputting (S4) the video stream using a respective 3D model (CAD) (as examined regarding claim 1). Modified Alfredsson fails to explicitly disclose applying the steps to each of a plurality of video streams (VS) received from different video camera devices. In analogous art, Pretlove discloses: receiving a plurality of video streams (VS) from different video camera devices; wherein the steps of assigning (S2) two-dimensional image components (PT) [. . .]. It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of modified Alfredsson further in view of the multiple cameras of Pretlove to enable the remote operator to receive images of video and a 3D model from a selected viewpoint (Pretlove: Par. [0017]-[0018]).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Alfredsson in view of Lasenby, Pretlove and Bataller.

Consider claim 19, Alfredsson in view of Lasenby and Pretlove discloses the method according to claim 18, but fails to explicitly disclose optionally forwarding one of the plurality of video streams to a processing platform, wherein the processing platform is a cloud service (CLD) and is configured to carry out the steps of assigning (S2) two-dimensional image components and determining (S3) the respective recording property in a computer-implemented manner in a secure processing environment. In analogous art, Bataller discloses: optionally forwarding one of the plurality of video streams to a processing platform, wherein the processing platform is a cloud service (CLD) and is configured to carry out the steps of assigning (S2) two-dimensional image components and determining (S3) the respective recording property in a computer-implemented manner in a secure processing environment (Par. [0051]: "video analytics platform 310 may include [. . .] virtual machines (VMs) provided in a cloud computing environment"; Par. [0044]-[0045]: "the video analytics platform 210 may access one or more external databases 260 to determine characteristics of the one or more objects identified by the video analyzer component"; Par. [0071]: "FIG. 5 is a flowchart of an example process 500 for determining intrinsic and extrinsic video camera parameters."; Par. [0077]-[0079]: "At step 508, the system calculates a tilt and focal length [. . .] At step 510, the system calculates vertical and horizontal angles of view"). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of modified Alfredsson further in view of the above teachings of Bataller to ascertain recording properties based on image analysis to possibly improve flexibility and/or accuracy of calibration (Bataller: Par. [0024]), and to implement the video analytics platform as a cloud service for various benefits such as increased flexibility and scalability.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN R SMITH, whose telephone number is (571) 270-1318. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Q Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

STEPHEN R. SMITH
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484
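
The reply-period rules in the action's conclusion reduce to date arithmetic anchored on the March 11, 2026 mailing date: a three-month shortened statutory period, month-by-month extensions under 37 CFR 1.136(a), and a hard six-month statutory cutoff. A minimal sketch of that calculation follows; the python-dateutil dependency is an assumption, and weekend/holiday rollover and the advisory-action wrinkle quoted above are ignored:

```python
# Reply deadlines for this final action, per the rules quoted above.
from datetime import date
from dateutil.relativedelta import relativedelta

mailed = date(2026, 3, 11)                         # mailing date of the final action
ssp = mailed + relativedelta(months=3)             # shortened statutory period
statutory_max = mailed + relativedelta(months=6)   # absolute six-month cutoff

print(f"SSP expires:       {ssp}")                 # 2026-06-11
print(f"statutory maximum: {statutory_max}")       # 2026-09-11

# Extensions under 37 CFR 1.136(a) buy additional months (for a fee),
# but the reply date can never pass the six-month statutory maximum.
for months in (1, 2, 3):
    extended = min(ssp + relativedelta(months=months), statutory_max)
    print(f"with {months}-month extension: {extended}")
```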

Prosecution Timeline

Sep 09, 2024 — Application Filed
Aug 18, 2025 — Non-Final Rejection (§103)
Nov 07, 2025 — Response Filed
Mar 11, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598272
PT/PT-Z CAMERA COMMAND, CONTROL & VISUALIZATION SYSTEM AND METHOD UTILIZING ARTIFICIAL INTELLIGENCE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12598280
VIDEO DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, COMPUTER READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597256
PARKING LOT MONITORING AND PERMITTING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12587623
IMAGE SYNTHESIS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567443
METHOD FOR READING AND WRITING FRAME IMAGES HAVING VARIABLE FRAME RATES AND SYSTEM THEREFOR
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 82% (+11.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
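
The "With Interview" figure is just the base grant probability plus the interview lift. A one-line sketch of that adjustment, using the numbers shown in the panel; the clamp is an assumption about how a dashboard would keep the adjusted figure in [0, 1]:

```python
# Projection arithmetic from the panel above (inputs as displayed).
base_grant, interview_lift = 0.71, 0.112
with_interview = min(base_grant + interview_lift, 1.0)   # 0.822 -> shown as 82%
print(f"with interview: {with_interview:.0%}")
```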
