Prosecution Insights
Last updated: April 19, 2026
Application No. 18/691,356

Image Pick-Up System and Image Pick-Up Method

Non-Final OA: §102, §103
Filed: Mar 12, 2024
Examiner: BEZUAYEHU, SOLOMON G
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Shimadzu Corporation
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (464 granted / 618 resolved), above the Tech Center average by 13.1%
Interview Lift: +30.9%, a strong improvement in allow rate for resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 30 applications currently pending
Career History: 648 total applications across all art units
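The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the page's convention (the 464/618 counts are from this page; the Tech Center average is back-derived from the stated +13.1% delta, not a published figure):

```python
# Sketch: reproduce the examiner-statistics arithmetic shown above.
# Counts (464 granted of 618 resolved) come from the page; the TC-average
# value is back-derived from the stated +13.1% delta and is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved applications."""
    return 100.0 * granted / resolved

rate = allow_rate(464, 618)   # ~75.1%, displayed as 75%
tc_avg = rate - 13.1          # implied TC 2600 average, ~62.0%

print(f"Career allow rate: {rate:.1f}%")
print(f"Implied TC average: {tc_avg:.1f}%")
```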

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 618 resolved cases.
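Each statute-specific figure pairs the examiner's rate with a delta versus the Tech Center average, so the averages can be back-derived (rate minus delta). A small sketch of that check, using the numbers listed above; notably, all four deltas imply the same 40% baseline, which suggests a single Tech Center estimate is applied to every statute:

```python
# Sketch: back-derive the Tech Center average implied by each statute-specific
# figure above. Rates and deltas are from the page; the derived averages are
# plain arithmetic, not published USPTO data.

stats = {              # statute -> (examiner rate %, delta vs TC avg %)
    "101": (16.0, -24.0),
    "103": (49.7, +9.7),
    "102": (13.4, -26.6),
    "112": (11.7, -28.3),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta   # since rate = tc_avg + delta
    print(f"§{statute}: {rate:.1f}% examiner vs {tc_avg:.1f}% TC average")
```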

Office Action

Rejections under §102 and §103
DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gopinath et al. (Pub. No. US 2009/0015670).

Regarding claim 20, Gopinath teaches an image pick-up method of picking up, with a camera (pan tilt zoom dome camera), an image of an object arranged in a real space (area) [Para. 2 “Pan Tilt Zoom (PTZ) dome cameras rotate, tilt and zoom during operation to provide surveillance of an area.”], the image pick-up method comprising: setting/establishing a first range (privacy zone or rectangle) in the real space [Para. 20 “To mask an object on the camera's video display and have the object remain hidden when the camera Pans, Tilts and Zooms, one can establish a privacy zone or rectangle having adjustable height and width so that it will cover the object which is to remain hidden”]; obtaining a position and a posture (Pan angle; tilt angle; or zoom position) of the camera [Para. 25 “Once the privacy zone is established, the PTZ dome camera can perform Pan, Tilt, and Zoom operations and the camera can change from the current position, such that a new screen is displayed. For the new screen, at any instant of time, the known information is the Pan angle, Tilt angle and the Zoom position”]; determining whether an object included in an image picked up by the camera is included in the first range based on the position and the posture of the camera [Para. 21 “For each new screen or video display produced by the camera's movement at any point in time, it is necessary to know the angular co-ordinates of the object to be masked and, if the object is in the video display, it is necessary to create a new privacy zone by redrawing a masked area or blanking rectangle/quadrilateral over this object”; Para. 85 “In Step 1, at the start of the process, translate the four corners of the current screen to angular coordinates, knowing the present Zoom, Pan, Tilt values” and Para. 86 “check each privacy zone to see if it has any overlap with the screen, such that the privacy zone should be displayed on the screen.”]; and performing image processing ((masked area) or rectangular fills) on the image picked up by the camera, the image processing selected depending upon a result of determination in the determining [Para. 21 “it is necessary to create a new privacy zone by redrawing a masked area or blanking rectangle/quadrilateral over this object”; Para. 22 “In the first two cases, the masked area will be shown on the screen, and in the last event, the current screen will not have the privacy zone”; Para. 92 “the privacy zones are drawn using pixel maps (and pixel coordinates translated in Step 2). Rectangular fills of each zone is done”. In this case, the term “selected” doesn't mean there is a selection step from multiple processing methods; rather, it determines whether the process happens or not. Therefore, the cited portion of the prior art reads on the claim limitation].

Claim 1 is rejected for the same reasons as claim 20. Furthermore, Gopinath teaches a system, camera, and controller to perform the claim limitations [fig. 2, 3 and related description].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Systrom et al. (Pub. No. US 2014/0078172).

Regarding claim 2, Gopinath teaches wherein the image processor is configured to perform mask processing (redrawing a masked area or blanking rectangle/quadrilateral) on the image (video display) picked up by the first camera, aiming at an object [Para. 21]. However, Gopinath doesn't explicitly teach an object determined as not being included in the first range by the determination unit. Systrom teaches an object determined as not being included in the first range (area in the image to emphasize) by the determination unit (rest of the image) [Para. 2 “some existing image processing applications offer a post-processing feature to mimic the effects of a shallow DOF. After a picture is taken and stored to disk, the user selects an area in the image to emphasize (e.g., by drawing a boundary box using the mouse), and the software applies blurring effects to the rest of the image”]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Systrom, because the modification enables the system to achieve mask processing that targets portions outside a selected range while keeping the selected range unmasked.

Regarding claim 19, Gopinath doesn't explicitly teach the claim limitation. However, Systrom teaches further comprising a display device, wherein the controller is configured to have the display device show the image resulting from the mask processing by the image processor [fig. 2, 3 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Systrom, because the modification enables the system to achieve mask processing that targets portions outside a selected range while keeping the selected range unmasked.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of CHEN et al. (Pub. No. US 2010/0075343).

Regarding claim 3, Gopinath doesn't explicitly teach the claim limitation. However, CHEN teaches wherein the first camera includes an inertial sensor [Para. 32], and the obtaining unit is configured to obtain, based on a detection value (measurement) from the inertial sensor, the posture of the first camera with respect to the posture of the first camera at the time of setting of the first range (upon system startup) [Para. 42 and 53]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by CHEN, because the modification enables the system to achieve inertial measurement-based camera pose changes relative to an initial pose for use in updating privacy zone placement during camera movement.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Lao et al. (Pub. No. US 2004/0208114).

Regarding claim 4, Gopinath doesn't explicitly teach the claim limitation. However, Lao teaches wherein the first camera includes a position sensor [Para. 147], and the obtaining unit is configured to obtain the position of the first camera based on a detection value (detection data) from the position sensor [Para. 147]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Lao, because the modification enables the system to use sensor-based position information for the camera while performing the privacy zone masking operations.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Trajkovic (Pub. No. US 2002/0167537).

Regarding claim 5, Gopinath teaches wherein the obtaining unit is configured to obtain at least one of the position and the posture of the first camera [Para. 94]. However, Gopinath doesn't explicitly teach the rest of the claim limitation. Trajkovic teaches that it is based on an amount of change (movement) of the position in the image of the object (feature point) included in the image picked up by the first camera [Para. 4, 8, and 9]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Trajkovic, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Trajkovic (Pub. No. US 2002/0167537) further in view of Chandraker et al. (Pub. No. US 2014/0078258).

Regarding claim 6, Gopinath in view of Trajkovic doesn't explicitly teach the claim limitation. However, Chandraker teaches wherein the obtaining unit is configured to obtain the position and the posture of the first camera based on at least one of a visual simultaneous localization and mapping (SLAM) technology, a structure from motion (SfM) technology, and a visual odometry (VO) technology [Para. 3, 5, 18, and 26]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath in view of Trajkovic to include the feature as taught by Chandraker, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Goto et al. (Pub. No. US 2015/0070389).

Regarding claim 7, Gopinath doesn't explicitly teach the claim limitations. Goto teaches wherein the obtaining unit is configured to extract a marker included in the image and obtain the position and the posture of the first camera based on an amount of change from a reference shape of the marker [Para. 133]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Goto, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Regarding claim 8, Gopinath doesn't explicitly teach the claim limitations.
Goto teaches a second camera that picks up an image of the first camera, wherein the obtaining unit is configured to obtain the position and the posture of the first camera based on an amount of change from a reference shape of the first camera included in the image picked up by the second camera [Para. 22, 34 and 44]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Goto, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Regarding claim 9, Gopinath doesn't explicitly teach the claim limitations. Goto teaches wherein the determination unit is configured to determine whether the object included in the image is included in the first range based on a distance between the object arranged in the real space and the first camera [fig. 2, 3 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Goto, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Regarding claim 10, Gopinath doesn't explicitly teach the claim limitations. Goto teaches a distance sensor that detects the distance between the object arranged in the real space and the first camera [fig. 2, 3 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Goto, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Regarding claim 11, Gopinath doesn't explicitly teach the claim limitations. Goto teaches wherein the controller is configured to estimate the distance between the object arranged in the real space and the first camera based on an estimation model generated by machine learning and the image picked up by the first camera [fig. 3 and related description]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Goto, because the modification enables the system to improve motion-based tracking for a non-stationary camera by compensating for unknown field of view changes via image alignment to minimize camera motion artifacts.

Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Pillai et al. (Pub. No. US 2020/0090359).

Regarding claim 12, Gopinath doesn't explicitly teach the claim limitations. However, Pillai teaches wherein the determination unit is configured to determine whether or not the object included in the image picked up by the first camera is included in the first range based on the position and the posture of the first camera [Para. 9, 10, 33 and 37].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Pillai, because the modification enables the system to improve monocular distance estimation by using a machine learning disparity model to produce more accurate depth estimates of objects from a single picked-up image.

Regarding claim 13, Gopinath doesn't explicitly teach the claim limitations. However, Pillai teaches an input device, wherein the input device is configured to accept position information of the first range with respect to the position and the posture of the first camera obtained by the obtaining unit, and the setting unit is configured to set the first range based on the position information of the first range [Para. 9, 12, and 23]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Pillai, because the modification enables the system to improve monocular distance estimation by using a machine learning disparity model to produce more accurate depth estimates of objects from a single picked-up image.

Regarding claim 14, Gopinath doesn't explicitly teach the claim limitations. However, Pillai teaches an input device, wherein the controller is configured to detect a plane perpendicular to a vertical direction in the image picked up by the first camera, and when a coordinate in an image inputted by a user is included in the detected plane, the setting unit is configured to set, as the first range, a region defined based on the coordinate [Para. 14, 33, and 49]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Pillai, because the modification enables the system to improve monocular distance estimation by using a machine learning disparity model to produce more accurate depth estimates of objects from a single picked-up image.

Regarding claim 15, Gopinath doesn't explicitly teach the claim limitations. However, Pillai teaches wherein the setting unit is configured to extract a marker included in the image and set the first range based on an amount of change from a reference shape of the marker [Para. 11, 35 and 38]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Pillai, because the modification enables the system to improve monocular distance estimation by using a machine learning disparity model to produce more accurate depth estimates of objects from a single picked-up image.

Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gopinath et al. (Pub. No. US 2009/0015670) in view of Romanowich (Pub. No. US 2008/0084473).

Regarding claim 16, Gopinath doesn't explicitly teach the claim limitations. However, Romanowich teaches comprising a storage, wherein specifying information representing the real space is stored in the storage, and the setting unit is configured to set the first range based on the specifying information [Para. 31-33]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Romanowich, because the modification enables the system to improve surveillance response and situational awareness by having a smart camera store a map of the monitored area and provide a corresponding map portion.
Regarding claim 17, Gopinath doesn't explicitly teach the claim limitations. However, Romanowich teaches wherein the controller is configured to create the specifying information based on a visual simultaneous localization and mapping (SLAM) technology or a structure from motion (SfM) technology [Para. 31, 43 and 45]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Romanowich, because the modification enables the system to improve surveillance response and situational awareness by having a smart camera store a map of the monitored area and provide a corresponding map portion.

Regarding claim 18, Gopinath doesn't explicitly teach the claim limitations. However, Romanowich teaches wherein the setting unit is configured to change the first range that has been set based on an input from a user [Para. 31 and 42]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gopinath to include the feature as taught by Romanowich, because the modification enables the system to improve surveillance response and situational awareness by having a smart camera store a map of the monitored area and provide a corresponding map portion.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOLOMON G BEZUAYEHU, whose telephone number is (571) 270-7452. The examiner can normally be reached Monday-Friday, 10 AM-7 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-0101 (in USA or Canada) or 571-272-1000.

/SOLOMON G BEZUAYEHU/
Primary Examiner, Art Unit 2666
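The masking flow the rejection attributes to Gopinath (Paras. 85-86: translate the current screen to angular coordinates given the pan, tilt, and zoom values, then test each privacy zone for overlap and mask any zone that appears on screen) can be sketched as below. Everything here is an illustrative assumption rather than code from the reference: the class names, the axis-aligned rectangular zone model, the symmetric field-of-view conversion, and the interval-overlap test; pan wrap-around at 360° is also ignored.

```python
# Sketch of a PTZ privacy-zone overlap check in angular coordinates.
# Illustrative only; not the cited reference's implementation.
from dataclasses import dataclass

@dataclass
class AngularRect:
    """Axis-aligned rectangle in (pan, tilt) angular coordinates, degrees."""
    pan_min: float
    pan_max: float
    tilt_min: float
    tilt_max: float

    def overlaps(self, other: "AngularRect") -> bool:
        # Two rectangles overlap iff their pan and tilt intervals both overlap.
        return (self.pan_min < other.pan_max and other.pan_min < self.pan_max
                and self.tilt_min < other.tilt_max and other.tilt_min < self.tilt_max)

def current_view(pan: float, tilt: float, fov_pan: float, fov_tilt: float) -> AngularRect:
    """Step 1: express the current screen as an angular rectangle around (pan, tilt).
    The field of view would shrink as zoom increases; here it is passed in directly."""
    return AngularRect(pan - fov_pan / 2, pan + fov_pan / 2,
                       tilt - fov_tilt / 2, tilt + fov_tilt / 2)

def zones_to_mask(view: AngularRect, zones: list[AngularRect]) -> list[AngularRect]:
    """Step 2: keep only the privacy zones that intersect the current screen;
    these are the regions that would get a rectangular fill."""
    return [z for z in zones if z.overlaps(view)]

# Example: a zone near pan 10°, tilt 0° falls inside a 60°x40° view at (0°, 0°),
# so it would be masked; a zone near pan 105° would not.
view = current_view(pan=0.0, tilt=0.0, fov_pan=60.0, fov_tilt=40.0)
print(zones_to_mask(view, [AngularRect(5.0, 15.0, -5.0, 5.0)]))
```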

Prosecution Timeline

Mar 12, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602717: APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR CONTEXTUALIZED EQUIPMENT RECOMMENDATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602946: DOCUMENT CLASSIFICATION USING UNSUPERVISED TEXT ANALYSIS WITH CONCEPT EXTRACTION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591350: TECHNIQUES FOR POSITIONING SPEAKERS WITHIN A VENUE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586355: ROAD AND INFRASTRUCTURE ANALYSIS TOOL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12561852: Cross-Modal Contrastive Learning for Text-to-Image Generation based on Machine Learning Models (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 99% (+30.9%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 618 resolved cases by this examiner. Grant probability derived from career allow rate.
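As a rough consistency check, applying the stated +30.9% interview lift as a relative multiplier on the 75% base rate lands close to the with-interview figure shown above. The tool's exact derivation is not disclosed, so the multiplier interpretation is an assumption:

```python
# Back-of-envelope check (assumption: the +30.9% lift acts as a relative
# multiplier on the base grant probability; the page does not state this).
base = 464 / 618                  # career allow rate, ~0.751
lift = 0.309                      # stated interview lift
with_interview = base * (1 + lift)
print(f"{with_interview:.1%}")    # -> 98.3%, near the page's 99% figure
```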
