Prosecution Insights
Last updated: April 17, 2026
Application No. 18/809,153

METHOD FOR THE PHYSICAL, IN PARTICULAR OPTICAL, DETECTION OF AT LEAST ONE USAGE OBJECT

Status: Non-Final OA (§103)
Filed: Aug 19, 2024
Examiner: HABIB, IRFAN
Art Unit: 2485
Tech Center: 2400 (Computer Networks)
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 88% (above average) — 637 granted / 721 resolved, +30.3% vs TC avg
Interview Lift: +7.8% (moderate) among resolved cases with an interview
Typical Timeline: 2y 2m average prosecution; 36 currently pending
Career History: 757 total applications across all art units

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§103: 70.0% (+30.0% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)

Deltas are relative to an estimated Tech Center average. Based on career data from 721 resolved cases.
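As a sanity check on the statute-specific figures, the "vs TC avg" deltas can be inverted to recover the implied Tech Center baseline. A minimal sketch, assuming delta = examiner rate minus TC average (the dashboard does not state the formula explicitly):

```python
# Invert the displayed deltas to recover the implied Tech Center baseline.
# Figures taken directly from the Statute-Specific Performance table.
stats = {
    "101": (3.5, -36.5),
    "102": (4.4, -35.6),
    "103": (70.0, 30.0),
    "112": (3.6, -36.4),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # assumed relationship: delta = rate - TC average
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
```

Under that assumption, every statute's implied baseline comes out to exactly 40.0%, consistent with a single TC-wide estimate being used for all four comparisons.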

Office Action

§103
DETAILED ACTION

1. This office action is in response to U.S. Patent Application No. 18/809,153, with effective filing date 4/18/2019. Claims 1-14 are pending.

Claim Rejections - 35 USC § 103

2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

4. Claims 1-6 & 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US 2019/0149724 A1) in view of Shamir et al. (US 2013/0034266 A1).

Per claims 1 & 8, Wu et al. discloses a machine-implemented method for the physical detection of at least one usage object, the machine-implemented method comprising: performing, by a processing unit of a smart phone or tablet: subdividing or categorizing, by the processing unit of the smart phone or tablet, usage object data into individual object classes (para. 4 & 6, e.g., unmanned aerial vehicles (UAVs) can be used for performing surveillance, reconnaissance, and exploration tasks for various applications; a mobile platform can be outfitted with an imaging device for capturing images of a surrounding environment; exposing an imaging device mounted aboard the mobile platform at a plurality of positions of components of the mobile platform to obtain images; determining, using the images, allowed positions of the components that do not obstruct a field-of-view of the imaging device).

Wu et al. fails to explicitly disclose the rest of the limitations as specified in claims 1 & 8. Shamir et al., however, teaches classifying, by the processing unit of the smart phone or tablet, a usage object by comparing the individual object classes with at least one in a database of the processing unit and/or with a database of an external CPU, and the processing unit and/or the CPU and/or the user him/herself selects a database object corresponding to the characteristic value and displays the database object on a screen of the smart phone or tablet (para. 23 & 25, e.g., an object classifier determines an object model respective of each of the objects of interest, according to the dynamic spatial and spectral characteristics associated with each object of interest; the object classifier then classifies the objects of interest according to the object models associated with the objects of interest and according to object models of known objects stored in a database), so that a camera image of the usage object and the database object are shown on the screen at least partially optically superimposed and/or juxtaposed (para. 55, e.g., object tracker 106 may also superimpose a representation of the tracked objects on a 2D or 3D representation of the scene; this representation of the scene along with the superimposed representations of the tracked objects may be displayed on a display (not shown)), and wherein the processing unit is configured to enable a detection process to be carried out by a user or implementation device, wherein the physical detection process comprises at least one temporal detection sequence, where during the detection sequence, at least two different images of the usage object are captured, wherein each image is associated with at least one database object (para. 55, e.g., after selecting objects of interest, at least two tracking sequences of the observed scene are acquired; the two tracking sequences may be acquired by a single imager sequentially positioned at different locations in the scene or by two different imagers).

Therefore, in view of the disclosures by Shamir et al., it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Wu et al. and Shamir et al. so that objects of interest in the scene are detected, tracked, and classified effectively according to the multi-spectral images acquired from several positions.

Per claims 2 & 9, Wu et al. further teaches the machine-implemented method as claimed in claim 1, wherein a conversion unit is used to break down the usage object into individual object classes, which data classes are then individually or commonly compared with data or data classes correspondingly deposited in the database.

Per claims 3 & 10, Shamir et al. further teaches the machine-implemented method as claimed in claim 1, wherein the data object is a template image of the corresponding usage object, which is deposited in the database (para. 31, e.g., the object detector identifies objects in scene 116 according to the initial spatial and spectral characteristics corresponding to the segments of interest).

Per claims 4 & 11, Shamir et al. further teaches the machine-implemented method as claimed in claim 3, wherein once the characteristic value of the usage object has been determined, selecting, on the basis of the characteristic value, the corresponding database object which can optically image the usage object, and displaying the optical image on a display next to the actually captured usage object (para. 31, e.g., the object detector identifies objects in scene 116 according to the initial spatial and spectral characteristics corresponding to the segments of interest).

Per claims 5 & 12, Shamir et al. further teaches the machine-implemented method as claimed in claim 4, wherein the usage object is detected in a user and/or an implementation device in such a way that an image of the usage object detected by the detection process is displayed at the same time as the database object shown on the display in an identical manner or in a manner identical to scale (para. 55-56, e.g., the object tracker provides the dynamic spatial and spectral characteristics respective of each tracked object to an object classifier).

Per claims 6 & 13, Shamir et al. further teaches the machine-implemented method as claimed in claim 5, wherein the image of the usage object is displayed in a close approximation to a usage object correspondingly deposited in the database, on the basis of the characteristic value and/or an optical dimension of the usage object (para. 23 & 25, e.g., an object classifier determines an object model respective of each of the objects of interest, according to the dynamic spatial and spectral characteristics associated with each object of interest; the object classifier then classifies the objects of interest according to the object models associated with the objects of interest and according to object models of known objects stored in a database).

Allowable Subject Matter

5. Claims 7 & 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Russell (US 2018/0244387 A1), e.g., an unmanned aerial vehicle (UAV) is operable in an autonomous mode; the UAV comprises an upwards-configurable sensor, an actuator, and a controller. Ohtomo et al. (US 2014/0240498 A1), e.g., an aerial photographing system comprising a remotely controlled flying vehicle and a camera.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB, whose telephone number is (571) 270-7325. The examiner can normally be reached Mon-Thu, 9 AM-7 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Irfan Habib/ Examiner, Art Unit 2485

Prosecution Timeline

Aug 19, 2024
Application Filed
Oct 18, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593047
METHOD AND APPARATUS FOR IMAGE ENCODING AND DECODING USING TEMPORAL MOTION INFORMATION
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12569313
HANDS-FREE CONTROLLER FOR SURGICAL MICROSCOPE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12568241
IMPROVEMENT OF BI-PREDICTION WITH CU LEVEL WEIGHT (BCW)
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12568198
3D DISPLAY METHOD AND 3D DISPLAY DEVICE
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12563216
METHODS AND DEVICES FOR ENHANCING BLOCK ADAPTIVE WEIGHTED PREDICTION WITH BLOCK VECTOR
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 96% (+7.8%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 721 resolved cases by this examiner. Grant probability derived from career allow rate.
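The headline projections are simple arithmetic on the examiner's career record. A minimal sketch, assuming the figures stated above (637 grants out of 721 resolved cases) and assuming the interview lift is simply added to the base rate:

```python
# Reproduce the dashboard's headline figures from the stated career data.
granted, resolved = 637, 721        # from the Examiner Intelligence section
interview_lift = 0.078              # +7.8% lift, assumed to be additive

allow_rate = granted / resolved     # career allow rate
with_interview = allow_rate + interview_lift

print(f"Grant probability: {allow_rate:.0%}")     # 88%
print(f"With interview:    {with_interview:.0%}") # 96%
```

Both printed values round to the dashboard's displayed 88% and 96%, so the "With Interview" number appears to be the career allow rate plus the flat interview lift.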
