Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,481

Monitoring Device

Status: Non-Final OA (§103)
Filed: Jun 19, 2024
Examiner: YANG, YI
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Leuze Electronic GmbH + Co. KG
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (above average; 295 granted / 415 resolved; +9.1% vs TC avg)
Interview Lift: +17.2% (strong; measured across resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 39 currently pending
Career History: 454 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 76.0% (+36.0% vs TC avg)
§102: 2.7% (-37.3% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 415 resolved cases
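The per-statute deltas above all point to the same Tech Center baseline. A quick sanity check, assuming each delta is a simple difference between the examiner's rate and the TC average (the source does not state how the deltas are computed):

```python
# Examiner's statute-specific rates and their stated deltas vs the TC average.
rates = {"101": 7.4, "103": 76.0, "102": 2.7, "112": 3.3}
deltas = {"101": -32.6, "103": 36.0, "102": -37.3, "112": -36.7}

# Implied baseline: rate - delta. Every statute recovers the same value,
# consistent with a single "black line" estimate of 40% for the Tech Center.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Under this assumption, the chart's Tech Center average estimate is a flat 40% across all four statutes.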

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Objections

Claims 1-15 are objected to because of the following informalities: Claims 1-15 include reference numbers such as “monitoring device (1)”; please remove these numbers. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6, 8-9 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Vu (U.S. Patent Application 20220379474) in view of Flexman (U.S. Patent Application 20190371012).

Regarding claim 1, Vu discloses a monitoring device (1) with a safety sensor (4) (cameras 102), which is designed for performing a protection field monitoring and can be connected to a configuration unit (8) (control system 112) designed for configuring protection fields (7) (workcell 100) for the protection field monitoring performed with the safety sensor (4) (paragraph [0050]: FIG. 1, which illustrates a representative 3D workcell 100 monitored by a plurality of cameras representatively indicated at 1021 and 1022; paragraph [0054]: FIG. 3... The control system 112 includes a central processing unit (CPU) 305, system memory 310; paragraph [0056]: The system memory 310 contains a series of frame buffers 335, i.e., partitions that store, in digital form (e.g., as pixels or voxels, or as depth maps), images obtained by the cameras 102; the data may actually arrive via I/O ports 327 and/or transceiver 325), characterized in that the configuration unit (8) has a graphical drawing interface (22) (user interface on display 320), in that a graphical information of a hazard region (2) within which a protection field (7) is to be configured is displayed in the graphical drawing interface (22), and in that elements of the graphical information are used for configuring the protection field (7) (paragraph [0061]: user interface, shown in display 320 and displaying the scene observed by the cameras, may allow a user to designate certain parts of the image as key elements of the machinery under control. In some embodiments, the interface provides an interactive 3D display that shows the coverage of all cameras to aid in configuration; paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course). Vu discloses all the features with respect to claim 1 as outlined above. However, Vu fails to disclose that the hazard region within which a protection field is to be configured is superimposed in the graphical drawing interface. Flexman discloses a hazard region within which a protection field is to be configured that is superimposed in the graphical drawing interface (paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 2, Vu as modified by Flexman discloses the monitoring device (1) according to claim 1, characterized in that the graphical information is a CAD depiction or a photo (Vu's paragraph [0024]: A workspace monitored by 2D or 3D cameras, with images collected at predetermined intervals and examined by a control system to detect hazardous conditions, enable dangerous machinery to operate proximate to, or in collaboration with, human operators; Flexman's paragraph [0004]: The live image stream can be provided using the eye, cameras, smart phones, tablets, etc. This image stream is augmented by a display to the user). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 6, Vu as modified by Flexman discloses the monitoring device (1) according to claim 1, characterized in that elements of the graphical information are marked on the graphical drawing interface, wherein marked elements are adopted as protection field-contour elements, or in that protection field-contour elements are calculated from marked elements (Vu's paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course; Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 8, Vu as modified by Flexman discloses the monitoring device (1) according to claim 6, characterized in that contrast edges are marked in a graphical information in the form of a photo and serve for forming protection field contour suggestions (Vu's paragraph [0024]: A workspace monitored by 2D or 3D cameras, with images collected at predetermined intervals and examined by a control system to detect hazardous conditions, enable dangerous machinery to operate proximate to, or in collaboration with, human operators; paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course; Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 9, Vu as modified by Flexman discloses the monitoring device (1) according to claim 6, characterized in that polygonal chains and/or support points are calculated from marked elements, the polygonal chains and/or support points serving for calculation of protection field contour elements or constituting protection field contour elements (Vu's paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course; Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 14, Vu as modified by Flexman discloses the monitoring device (1) according to claim 1, characterized in that the safety sensor (4) is constituted by an area distance sensor or a camera sensor (Vu's paragraph [0024]: A workspace monitored by 2D or 3D cameras, with images collected at predetermined intervals and examined by a control system to detect hazardous conditions, enable dangerous machinery to operate proximate to, or in collaboration with, human operators; paragraph [0050]: FIG. 1, which illustrates a representative 3D workcell 100 monitored by a plurality of cameras representatively indicated at 1021 and 1022). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Regarding claim 15, Vu as modified by Flexman discloses the monitoring device (1) according to claim 1, characterized in that the configuration unit (8) and the safety sensor (4) are connected by a bidirectional, failsafe data connection (9) (Vu's paragraph [0056]: The system memory 310 contains a series of frame buffers 335, i.e., partitions that store, in digital form (e.g., as pixels or voxels, or as depth maps), images obtained by the cameras 102; the data may actually arrive via I/O ports 327 and/or transceiver 325; see FIG. 3: I/O ports 327 are bidirectional). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu to overlay a clearance region as taught by Flexman, to provide safety assistance or guidance to the user.

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Vu (U.S. Patent Application 20220379474) in view of Flexman (U.S. Patent Application 20190371012), and further in view of Chu (U.S. Patent Application 20170186235).

Regarding claim 3, Vu as modified by Flexman discloses measures to minimize or mitigate crosstalk among sensors or cameras in factory-scale settings involving independently monitored workcells (Vu's paragraph [0121]). However, Vu as modified by Flexman fails to disclose that the graphical information is to-scale. Chu discloses that the graphical information is to-scale (paragraph [0006]: reference marker enables the augmented graphics to be constructed/superimposed in the actual image with the correct scale and/or orientation relative to the marker in the combined image(s) produced). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu and Flexman to display the image with correct scale as taught by Chu, to offer the user the capability to produce interactive, immersive and live-like media contents enriched with a variety of graphical content.
Regarding claim 4, Vu as modified by Flexman and Chu discloses the monitoring device (1) according to claim 1, characterized in that for a distorted graphical information, markings with a scale are contained or can be input therein, on the basis of which a distortion correction of the graphical information takes place in the configuration unit (8) (Chu's paragraph [0115]: the determination of the parameters associated with the reference markers in the acquired scene may include one or more of the scale, distortion or angular orientation adjustment which is then applied to the graphical content superimposed on the acquired scene). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu and Flexman to display the image with correct scale as taught by Chu, to offer the user the capability to produce interactive, immersive and live-like media contents enriched with a variety of graphical content.

Claims 5, 7 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Vu (U.S. Patent Application 20220379474) in view of Flexman (U.S. Patent Application 20190371012), in view of Chu (U.S. Patent Application 20170186235), and further in view of Kitchen (U.S. Patent Application 20210170676).

Regarding claim 5, Vu as modified by Flexman and Chu discloses all the features with respect to claim 4 as outlined above. However, Vu as modified by Flexman and Chu fails to disclose that spatial axes are labeled with markings on the basis of which the distorted graphical information is converted into a graphical depiction in a plane. Kitchen discloses that spatial axes are labeled with markings on the basis of which the distorted graphical information is converted into a graphical depiction in a plane (paragraph [0091]: the as-obtained image can be corrected for triangulation and offset from the axis normal to the bottom surface 228 of the layer 220 (for example using the registry marks 330 as shown and described in connection with FIG. 6 and based on angle α), which is also known as “perspective correction.”). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Regarding claim 7, Vu as modified by Flexman, Chu and Kitchen discloses the monitoring device (1) according to claim 6, characterized in that line elements are marked in a graphical information in the form of a CAD depiction and serve for forming protection field contour suggestions (Kitchen's paragraph [0102]: The reference binary image can be generated based on a CAD model or other input and corresponds to the input used to control the manufacturing of the layer by the additive manufacturing machine; Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Regarding claim 10, Vu as modified by Flexman, Chu and Kitchen discloses the monitoring device (1) according to claim 7, characterized in that based on protection field contour elements, a protection field contour suggestion is calculated, which is displayed on the graphical drawing interface (Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region; Vu's paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Regarding claim 11, Vu as modified by Flexman, Chu and Kitchen discloses the monitoring device (1) according to claim 10, characterized in that measurement data of the safety sensor (4) is used for calculating a protection field suggestion (Vu's paragraph [0050]: FIG. 1, which illustrates a representative 3D workcell 100 monitored by a plurality of cameras representatively indicated at 1021 and 1022; paragraph [0061]: If the system is to be configured with some degree of high-level information about the machinery being controlled (for purposes of control routines 350, for example)—such as the location(s) of dangerous part or parts of the machinery and the stopping time and/or distance—analysis module 342 may be configured to provide intelligent feedback as to whether the cameras are providing sufficient coverage, and suggest placement for additional cameras; paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Regarding claim 12, Vu as modified by Flexman, Chu and Kitchen discloses the monitoring device (1) according to claim 10, characterized in that a protection field contour suggestion displayed on the graphical drawing interface is modifiable (Vu's paragraph [0063]: continuous monitoring is performed to ensure that the observed background image is consistent with the space map 345 stored during the startup period. Background can also be updated if stationary objects are removed or are added to the workcell; paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Regarding claim 13, Vu as modified by Flexman, Chu and Kitchen discloses the monitoring device (1) according to claim 12, characterized in that a protection field contour suggestion displayed on the graphical drawing interface can be confirmed by inputs to the configuration unit (8), and in that a confirmed protection field contour suggestion is adopted as a protection field (7) in the safety sensor (4) (Vu's paragraph [0167]: FIG. 5... The volumes surrounding the moving objects (determined by their position and estimated trajectories) that are deemed unsafe for volumes of nearby objects to overlap are continuously checked for collisions or movements that bring them on a potential collision course; Flexman's paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Vu, Flexman and Chu to perform perspective correction as taught by Kitchen, to detect and remediate manufacturing defects.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang, whose telephone number is (571) 272-9589. The examiner can normally be reached Monday-Friday, 9:00 AM-6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/YI YANG/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jun 19, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586304
PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
2y 5m to grant; granted Mar 24, 2026

Patent 12567129
Image Processing Method and Electronic Device
2y 5m to grant; granted Mar 03, 2026

Patent 12561276
SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION
2y 5m to grant; granted Feb 24, 2026

Patent 12541902
SIGN LANGUAGE GENERATION AND DISPLAY
2y 5m to grant; granted Feb 03, 2026

Patent 12541896
COMPUTER-BASED CONTENT PERSONALIZATION OF A VISUAL DISPLAY
2y 5m to grant; granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 88% (+17.2%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
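The footnote says the grant probability is derived from the career allow rate, and the interview figure matches the stated +17.2% lift. A minimal sketch of that arithmetic, assuming simple ratios and additive percentage points (the source does not describe its actual model):

```python
# Derive the headline projections from the examiner's career record
# (assumption: grant probability = career allow rate, interview lift
# is added in percentage points).
granted, resolved = 295, 415
allow_rate_pct = 100 * granted / resolved

print(round(allow_rate_pct))  # 71 -> "Grant Probability: 71%"

interview_lift_pts = 17.2  # stated interview lift, percentage points
print(round(allow_rate_pct + interview_lift_pts))  # 88 -> "With Interview: 88%"
```

Both rounded values reproduce the dashboard's 71% and 88% figures exactly, which supports the stated derivation.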
