Prosecution Insights
Last updated: April 19, 2026
Application No. 17/772,649

INFORMATION PROVIDING APPARATUS, INFORMATION PROVIDING METHOD, INFORMATION PROVIDING PROGRAM, AND STORAGE MEDIUM FOR A PASSENGER IN A VEHICLE

Status: Final Rejection (§103), OA Round 4 (Final)
Filed: Apr 28, 2022
Examiner: DUFFY, CAROLINE TABANCAY
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Pioneer Corporation

Grant Probability: 80% (Favorable); 99% with interview
Expected OA Rounds: 5-6
Median Time to Grant: 3y 1m

Examiner Intelligence

Career Allow Rate: 80%, above average (62 granted / 78 resolved; +17.5% vs TC avg)
Interview Lift: +26.9%, strong (allow rate of resolved cases with vs. without interview)
Typical Timeline: 3y 1m average prosecution; 18 applications currently pending
Career History: 96 total applications across all art units
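
For concreteness, here is the arithmetic behind these cards as a minimal sketch. The 62 granted / 78 resolved totals come from this page; the per-case interview split is invented for illustration (the page does not publish it), so the printed lift will not exactly reproduce the +26.9% shown above.

```python
# Sketch of the arithmetic behind the examiner cards. The 62/78 totals are
# from this page; the interview split below is hypothetical, so its computed
# lift will not exactly equal the displayed +26.9%.
granted, resolved = 62, 78
print(f"career allow rate: {granted / resolved:.1%}")    # 79.5%, shown as 80%

g_with, r_with = 34, 35                  # assumed: cases with an interview
g_without, r_without = granted - g_with, resolved - r_with

lift = g_with / r_with - g_without / r_without
print(f"with interview:    {g_with / r_with:.1%}")        # 97.1%
print(f"without interview: {g_without / r_without:.1%}")  # 65.1%
print(f"interview lift:    {lift:+.1%}")                  # +32.0 points (illustrative)
```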

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Based on career data from 78 resolved cases; the Tech Center averages used for comparison are estimates.
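
One sanity check worth noting: every delta in this table back-computes to the same Tech Center average of 40.0% (e.g., 13.8% + 26.2% = 58.2% - 18.2% = 40.0%). A minimal sketch of the comparison, with the page's rates hard-coded and the TC average derived from them:

```python
# Reproduces the statute table above. The examiner rates are taken from the
# page; the 40% TC average is back-computed from the displayed deltas rather
# than taken from any external dataset.
examiner_rate = {"101": 0.138, "103": 0.582, "102": 0.077, "112": 0.182}
tc_average = 0.40                        # implied by every row of the table

for statute, rate in examiner_rate.items():
    print(f"§{statute}: {rate:.1%} ({rate - tc_average:+.1%} vs TC avg)")
```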

Office Action

Final Rejection — §103 (mailed Oct 01, 2025)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Amendment

The Amendment filed 09/05/2025 has been entered. Claims 1 and 3-8 remain pending. Claims 2, 9, and 10 are cancelled. Claims 11 and 12 are new.

Response to Arguments

Applicant's arguments filed 09/05/2025 have been fully considered, but they are not persuasive. Applicant argues that Kim does not teach "that indicating the 'desired object' is carried out via 'a visual salience technique.'" However, Anderson is relied upon to teach this limitation, with Kim relied upon to support extracting "a plurality" of areas of interest in the same field of endeavor. Anderson teaches extracting an area of interest "in the captured image by using a visual salience technique."

Under the broadest reasonable interpretation, a visual salience technique is a technique used to extract an area of interest based on visual salience. Per the Encyclopedia of the Human Brain (2002), "Salience refers to the perceptual prominence of an object relative to its background," and "The salience of a visual target is determined by the degree to which the features of the target differ from those of the surroundings." Additionally, Itti et al. (A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 1998) discloses: "The purpose of the saliency map is to represent the conspicuity—or 'saliency'—at every location in the visual field by a scalar quantity and to guide the selection of attended locations, based on the spatial distribution of saliency."

The specification of the instant application provides an example of a visual salience technique ([0020]: "the area extracting unit 324 extracts the area of interest in the captured image by using a so-called visual salience technique. More specifically, the area extracting unit 324 extracts the area of interest in the captured image by image recognition (image recognition using artificial intelligence (AI)) using a first learning model described below."). However, as currently amended, Claim 1 does not require the visual salience technique to include an artificial intelligence learning model. Thus, as best understood in light of the specification, "visual salience" is the conspicuity of a feature compared to other features in the image.

Anderson discloses an object detector that may use algorithms, including shape detection, to detect objects. The shape of an object is a visual feature that may distinguish the object from other features in the image; thus, shape detection as described by Anderson is a visual salience technique. Anderson, [0027] discloses: "In some embodiments, object detector 114 may use algorithms, such as shape detection, to process images from camera 112. In other embodiments, object detector 114 may use positional data, such as may be obtained from a GPS receiver, to determine a point or points of interest within the path defined by extending sightline 124 from the location of the vehicle. Such a positional approach may, in embodiments, be used to supplement or replace a visual object detection." Visual object detection by shape detection is a visual salience technique. Thus, Anderson teaches a visual salience technique.
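
To make the construction concrete, here is a minimal center-surround sketch in the spirit of the Itti et al. saliency map quoted above: each pixel is scored by how much its local appearance differs from its surroundings, and connected high-scoring regions become candidate areas of interest. This illustrates the concept only, assuming NumPy/SciPy; it is not the claimed apparatus, Anderson's object detector, or Itti's full model.

```python
# Crude saliency technique: difference-of-Gaussians center-surround contrast.
# Peaks in the map are "conspicuous" relative to their background, matching
# the examiner's reading of "visual salience" as relative conspicuity.
import numpy as np
from scipy import ndimage

def saliency_map(gray):
    """Scalar conspicuity per pixel: fine-scale center minus coarse surround."""
    center = ndimage.gaussian_filter(gray.astype(float), sigma=2)
    surround = ndimage.gaussian_filter(gray.astype(float), sigma=16)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

def areas_of_interest(gray, thresh=0.6):
    """Bounding slices of connected regions whose saliency exceeds thresh."""
    labels, _ = ndimage.label(saliency_map(gray) > thresh)
    return ndimage.find_objects(labels)

# Toy scene: a bright patch "pops out" against a dark background.
img = np.zeros((128, 128))
img[40:60, 70:90] = 255.0
print(areas_of_interest(img))   # one region, roughly rows 40-60, cols 70-90
```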
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson (US 2019/0047582 A1) in view of Kim et al. (US 2014/0145931 A1).

Regarding Claim 1, Anderson teaches "An information providing apparatus, comprising a controller" (Anderson, [0031] discloses "At least some portions of apparatus 100 may be implemented using hardware. Such hardware devices may include application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete components, programmable controllers, general purpose programmable CPUs, or any other hardware technology now known or later developed that is capable of providing the necessary logic"; where hardware such as a programmable CPU is a controller);

"configured to: capture an image including surroundings of a moving body" (Anderson, [0017] discloses "An object detector 114 may detect the selected object 104 within the overlapping field of view portion 108 from a feed from the camera 112"; where a camera is an image capture device and thus captures images. Anderson, [0017] also discloses "Apparatus 100 may be part of or associated with a vehicle. Fields of view 106 and 110 may be at least partially through a windshield 120 of a vehicle"; where a vehicle is a moving body and fields of view through a windshield are surroundings of a moving body);

"extract by using a visual salience technique" (Anderson, [0017] discloses "An object detector 114 may detect the selected object 104 within the overlapping field of view portion 108 from a feed from the camera 112"; where an object detector extracts areas of interest and a selected object is an area of interest. Anderson, [0024] also discloses "Gesture detector 102 may pass information about detected gestures to object detector 114. This information may include spatial information about the relative location and orientation of user 101 to camera 112, as well as a vector indicating the sightline 124 of user 101, which may be transformed through the field of view 110 of camera 112 to determine whether the user 101 is focusing on an object 104 within camera 112's field of view"; where a sightline is a line of sight. Anderson, [0027] discloses "In some embodiments, object detector 114 may use algorithms, such as shape detection, to process images from camera 112"; where visual object detection by shape detection is a visual salience technique);

"detect a posture of a passenger in the moving body" (Anderson, [0017] discloses "FIG. 1 depicts an example apparatus 100 that may include a gesture detector 102 to detect an action or posture from a user 101"; where a gesture detector is a posture detecting unit);

(Anderson, [0017] discloses "An object recognizer 116 may perform object recognition on selected object 104 from the feed and present the user with at least one action to take based upon the detected object 104"; where performing object recognition is recognizing an object, and where the object recognizer takes input from the gesture detector via the object detector, thus basing the identification on the gesture (posture));

"and provide object information related to the object included in the area of interest to the passenger in the moving body" (Anderson, [0029] discloses "Object recognizer 116 may, based upon the nature of the recognized object 104, provide relevant options to user 101 via an interface 122. Interface 122 may include a display connected to an infotainment system, and/or a voice module for providing aural alerts and options"; where an interface is an information providing unit and providing aural alerts and options is providing object information).

Although Anderson discloses use of multiple gesture tracking cameras (Anderson, Fig. 3, element 222, and [0038], which discloses "Environment group 204 may include an internal sensor array 222 and external sensors 224, which may correspond to device 118 and camera 112, respectively, and other such devices that allow system 200 to recognize user 101 gestures or postures and respond accordingly"), Anderson does not explicitly teach "extract a plurality of areas of interest in the captured image" and "identify one area of interest of the plurality of areas of interest based on the posture" (emphasis added).

However, in an analogous field of endeavor, Kim discloses "extract a plurality of areas of interest in the captured image" (Kim, [0052] discloses "Referring to FIG. 3, in a situation in which a user drives a vehicle while intuitively gazing at leading vehicles 311 and 312, the user may indicate a desired object, for example, the leading vehicle 311 with a stationary hand motion"; where multiple leading vehicles are a plurality of areas of interest and objects at which a user is "intuitively gazing" are in a line of sight) and "identify one area of interest of the plurality of areas of interest based on the posture" (Kim, [0052], cited above; where a desired object is one area of interest of the plurality of areas of interest).

[Figure: Fig. 3 of Kim]
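
Read together, the references amount to: extract several candidate areas of interest, then identify the one the passenger's posture-derived sightline indicates. The geometry below is a generic, hedged sketch of that selection step (ray-to-candidate distance in assumed vehicle coordinates); it is not code from Anderson or Kim.

```python
# Pick the candidate area of interest closest to a gaze ray. The eye origin,
# sightline vector, and candidate positions are fabricated example values.
import numpy as np

def pick_area_of_interest(origin, direction, centers):
    """Index of the candidate center with the smallest perpendicular
    distance to the ray origin + t*direction (t >= 0)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    o = np.asarray(origin, float)
    best, best_dist = None, np.inf
    for i, c in enumerate(centers):
        v = np.asarray(c, float) - o
        t = max(v @ d, 0.0)                  # closest point along the ray
        dist = np.linalg.norm(v - t * d)
        if dist < best_dist:
            best, best_dist = i, dist
    return best

eye = [0.0, 1.2, 0.0]                        # driver's eye position (m)
gaze = [0.1, 0.0, 1.0]                       # sightline from detected posture
leading_vehicles = [(-2.0, 1.0, 15.0), (1.6, 1.0, 14.0)]
print(pick_area_of_interest(eye, gaze, leading_vehicles))  # -> 1 (right vehicle)
```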
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Anderson to incorporate the teachings of Kim by detecting multiple objects within a user's LOS and then selecting only one object as identified by a stationary hand motion. One of ordinary skill in the art would be motivated to combine the Anderson and Kim references in order to allow the driver to avoid continuously verifying information of an object in their line of sight, leaving the driver free to focus on driving. Kim, [0047] discloses: "A driver may drive a vehicle without continuously verifying information associated with driving or information associated with an augmented object displayed on the HUD because the driver may need to simultaneously drive and verify conditions ahead and an environment around the vehicle. When an LOS of the driver is directed toward the augmented object on a windshield of the vehicle while concentrating on driving, the apparatus for controlling a multi-modal HMI may minimize degrees of an LOS dispersion and a unfamiliarity by augmenting the object at an actual location of the leading vehicle." Additionally, at the cited limitations, Kim differs from Anderson only in the number of objects identified; combining the Kim and Anderson references provides the benefit of identifying objects in a scene containing multiple objects while reducing computation time. Accordingly, the combination of Anderson and Kim discloses the invention of Claim 1.

Regarding Claim 5, the combination of Anderson and Kim teaches "The information providing apparatus according to claim 1, wherein the controller is further configured to: obtain request information from the passenger in the moving body requesting the object information to be provided" (Kim, [0050] discloses "For example, when a user points to an object or gestures using a stationary motion to indicate an object, and concurrently commands object recognition with a voice through a voice recognition mode, the apparatus for controlling a multi-modal HMI may track the corresponding object"; where commanding object recognition with voice is requesting object information), "and provide the object information in response to the request information" (Kim, [0051] discloses "FIG. 3 is a view illustrating an example of selecting an object located ahead of a vehicle in which an apparatus for controlling a multi-modal HMI is installed, and displaying object-related information of the selected object according to an embodiment of the present invention"; where object-related information is object information). The proposed combination, as well as the motivation for combining the Anderson and Kim references presented in the rejection of claim 1, apply to claim 5 and are incorporated herein by reference. Thus, the apparatus recited in claim 5 is met by Anderson and Kim.
Regarding Claim 6, the combination of Anderson and Kim teaches "The information providing apparatus according to claim 5, wherein the request information is voice information related to voice spoken by the passenger" (Kim, [0050] discloses "For example, when a user points to an object or gestures using a stationary motion to indicate an object, and concurrently commands object recognition with a voice through a voice recognition mode, the apparatus for controlling a multi-modal HMI may track the corresponding object"), "wherein the controller is further configured to: analyze the voice information" (Kim, [0034] discloses "the present invention may receive voice information recognized by a voice recognizer 220 and gesture information recognized by a gesture recognizer 230, and generate a multi-modal control signal"; where voice recognizing is analyzing voice information), "extract a subset of the plurality of areas of interest" (Kim, [0037] discloses "The apparatus for controlling a multi-modal HMI may further include an object recognizer 250 to recognize an object located ahead of a vehicle being driven by the user, and a lane recognizer 260 to recognize a lane in which a vehicle corresponding to the object is traveling"; where a vehicle and the lane in which the vehicle is traveling are a subset of the plurality of areas of interest), "recognize objects respectively included in the subset of the plurality of areas of interest" (Anderson, [0017] discloses "An object recognizer 116 may perform object recognition on selected object 104 from the feed and present the user with at least one action to take based upon the detected object 104"; where an object recognizer is an object recognizing unit and performing object recognition is recognizing objects. Kim, page 15, paragraph 3 discloses "For example, the processor 200 recognizes the finger of the user 500 appearing in the indoor image as a gesture, and recognizes the position P2 of the front window 12A where the extension line L3 of the direction in which the recognized finger is directed, and a predetermined area including the calculated position P2 can be set as the selection area S2"; where objects in selection areas S1 and S2 are a plurality of areas of interest), "and provide the object information related to any one object of the objects respectively included in the subset of the plurality of areas of interest, on the basis of a result of the analysis of the voice information" (Kim, [0059] discloses "The apparatus for controlling a multi-modal HMI may further include a multi-modal engine unit to synthetically control a driver gesture motion recognition and a driver voice recognition, and a rendering engine to calculate and display a focal distance between an object and a driver projected on a glass windshield of a vehicle, based on an interior and an exterior of the vehicle and an LOS recognition of the driver"; where projecting a focal distance between an object and a driver is providing object information related to one object, and driver voice recognition is analysis of voice information). The proposed combination, as well as the motivation for combining the Anderson and Kim references presented in the rejection of claim 1, apply to claim 6 and are incorporated herein by reference. Thus, the apparatus recited in claim 6 is met by Anderson and Kim.
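
A toy sketch of the multi-modal flow mapped to claims 5-6: the gaze posture narrows the scene to a subset of areas of interest, and analysis of the spoken request determines which object's information is provided. The labels, keyword matching, and information store here are all hypothetical placeholders, not Kim's implementation.

```python
# Hypothetical voice + gaze selection. `gazed_subset` stands in for areas of
# interest already filtered by the passenger's line of sight; the "analysis"
# is deliberately simplified to keyword matching.
def provide_object_info(voice_text, gazed_subset, info_db):
    words = voice_text.lower().split()
    for label in gazed_subset:               # gaze-filtered subset only
        if label in words:
            return info_db.get(label, "no information available")
    return "please name an object you are looking at"

info_db = {"vehicle": "leading vehicle, 14 m ahead", "sign": "speed limit: 60"}
print(provide_object_info("what is that vehicle", ["sign", "vehicle"], info_db))
# -> "leading vehicle, 14 m ahead"
```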
Regarding Claim 7, Claim 7 recites a method with steps corresponding to the elements of the system recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements in its corresponding system claim. Additionally, the rationale and motivation to combine the Anderson and Kim references, presented in the rejection of claim 1, apply to this claim.

Regarding Claim 8, Claim 8 recites a non-transitory computer-readable storage medium storing a program with instructions corresponding to the units recited in claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding system claim. Additionally, the rationale and motivation to combine the Anderson and Kim references, presented in the rejection of claim 1, apply to this claim. Finally, the combination of Anderson and Kim discloses "A non-transitory computer-readable storage medium having stored therein an information providing program for causing a computer to execute" (Anderson, [0045] discloses "Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium").

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson (US 2019/0047582 A1) in view of Kim et al. (US 2014/0145931 A1), further in view of Kito et al. (US 2019/0169917 A1).

Regarding Claim 3, the combination of Anderson and Kim discloses "The information providing apparatus according to claim 1, wherein the captured image includes a subject that is the passenger in the moving body" (Kim, Fig. 3 and [0051] discloses "Referring to FIG. 3, in a situation in which a user drives a vehicle while intuitively gazing at leading vehicles 311 and 312, the user may indicate a desired object, for example, the leading vehicle 311 with a stationary hand motion"; where a user driving a vehicle is a passenger, and where Fig. 3 shows the hand of the user within the field of view) "and the controller detects the posture" (Anderson, [0020] discloses "A gesture could be any physical movement by a user that can be identified as signaling a selection of an object. Some examples could include pointing, nodding, swiping, circling, or other similar movements. A change in posture may include changes such as nodding or head turning, and more commonly will include eye movements as user 101 looks at surrounding objects"; where the head and arms (as indicated by a pointing gesture) are parts of the skeleton). The proposed combination, as well as the motivation for combining the Anderson and Kim references presented in the rejection of Claim 1, apply to Claim 3 and are incorporated herein by reference.

Although Anderson teaches detecting a user's limb and head movements, the combination of Anderson and Kim does not explicitly teach "detecting a skeleton of the passenger, on the basis of the captured image." However, in an analogous field of endeavor, Kito teaches "detecting a skeleton of the passenger, on the basis of the captured image" (Kito, Fig. 4 and [0034] discloses "In the present embodiment, the recognition unit 212 extracts joints (feature points) of each of portions of passenger's body (upper body) reflected in the captured image and generates skeleton information (skeleton data). Then, the recognition unit 212 recognizes the action of the passenger based on the generated skeleton information"; where generating skeleton information is detecting a skeleton of a passenger).
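
For illustration, a minimal sketch of what skeleton information buys once joints are extracted, in the spirit of Kito's recognition unit: a nearly straight shoulder-elbow-wrist chain is read as a pointing gesture, and the forearm vector gives the pointing direction. The keypoints are fabricated, and any upstream pose estimator is assumed rather than shown.

```python
# Toy skeleton: 2D joint keypoints (pixels) for one arm, standing in for the
# joint/feature-point extraction Kito attributes to its recognition unit.
import numpy as np

skeleton = {
    "right_shoulder": np.array([320.0, 200.0]),
    "right_elbow":    np.array([360.0, 230.0]),
    "right_wrist":    np.array([400.0, 260.0]),
}

def pointing_direction(sk):
    """Unit forearm vector (elbow -> wrist) in image coordinates."""
    v = sk["right_wrist"] - sk["right_elbow"]
    return v / np.linalg.norm(v)

def is_pointing(sk, min_alignment=0.9):
    """Treat an almost straight shoulder-elbow-wrist chain as pointing."""
    upper = sk["right_elbow"] - sk["right_shoulder"]
    fore = sk["right_wrist"] - sk["right_elbow"]
    cos = (upper @ fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return cos > min_alignment

print(is_pointing(skeleton), pointing_direction(skeleton))  # True [0.8 0.6]
```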
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Anderson and Kim to incorporate the teachings of Kito by generating skeleton information. One of ordinary skill in the art would be motivated to combine the Anderson, Kim, and Kito references based on the explicit teachings of Anderson to include a technology that allows detecting user movements (Anderson, [0021] discloses "Device 118 may be implemented with any suitable technology now known or later developed that allows device 118 to correctly detect user 101's movements, gestures, posture changes and/or eye movements. Such examples may include a digital camera, an infrared detector, a multi-point infrared detector, or similar such technologies. In some embodiments, device 118 may provide 3-D spatial data about the position of user 101"). Accordingly, the combination of Anderson, Kim, and Kito discloses the invention of Claim 3.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson (US 2019/0047582 A1) in view of Kim et al. (US 2014/0145931 A1), henceforth Kim 2014, further in view of Kim et al. (KR 20160134075 A), henceforth Kim 2016.

Regarding Claim 4, the combination of Anderson and Kim 2014 discloses "The information providing apparatus according to claim 1, wherein the controller is further configured to: obtain positional information related to a position of the moving body" (Kim 2014, [0036] discloses "The LOS recognizer 240 may calculate a focal distance at which the user gazes at a selected object, based on a movement speed of the selected object, and also calculate the focal distance at which the user gazes at the selected object, based on a distance between the selected object and a vehicle being driven by the user"; where the distance between an object and a vehicle being driven by the user is positional information related to a position of the moving body).

The combination of Anderson and Kim 2014 does not explicitly teach "obtain facility information related to a facility, wherein the controller recognizes the object included in the area of interest based on the positional information and the facility information." However, in an analogous field of endeavor, Kim 2016 teaches this limitation (Kim 2016, page 18, paragraph 5 discloses "For example, the processor 200 extracts information corresponding to the selected first building 1004 from the entire road and facility information included in the map information, and outputs the extracted information in a form recognizable by the user"; where a building is a facility and map information includes positional information and facility information).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Anderson and Kim 2014 to incorporate the teachings of Kim 2016 by extracting information of a building. One of ordinary skill in the art would be motivated to combine the Anderson, Kim 2014, and Kim 2016 references in order to provide object information to the user about other types of objects that may exist in the driving scene (Kim 2016, page 7, paragraph 3 discloses "Examples of objects appearing in the traveling image include, for example, other vehicles, pedestrians, lanes, facilities (e.g., buildings), and the like. That is, if it exists around the vehicle 1, it can appear in the running image as an object irrespective of its type"). Accordingly, the combination of Anderson, Kim 2014, and Kim 2016 discloses the invention of Claim 4.
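
A hedged sketch of the Claim 4 combination as the examiner frames it: the moving body's position plus stored facility (map) information resolves which facility a gaze bearing indicates. The map records, the 200 m cutoff, and the 15-degree tolerance are all invented for illustration; Kim 2016's actual map lookup is not shown in the record excerpted here.

```python
# Resolve a facility from vehicle position + gaze bearing using hypothetical
# map information. Uses a local flat-earth approximation, fine at street scale.
import math

facilities = [  # hypothetical (name, latitude, longitude) map records
    ("First Building", 35.6581, 139.7017),
    ("Station Plaza", 35.6590, 139.7035),
]

def recognize_facility(lat, lon, bearing_deg, max_m=200.0, tol_deg=15.0):
    best, best_d = None, max_m
    for name, flat, flon in facilities:
        dn = (flat - lat) * 111_320.0                       # meters north
        de = (flon - lon) * 111_320.0 * math.cos(math.radians(lat))
        heading = math.degrees(math.atan2(de, dn)) % 360.0  # bearing to facility
        off = abs((heading - bearing_deg + 180.0) % 360.0 - 180.0)
        dist = math.hypot(dn, de)
        if dist < best_d and off < tol_deg:
            best, best_d = name, dist
    return best

print(recognize_facility(35.6584, 139.7020, bearing_deg=64.0))  # Station Plaza
```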
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson (US 2019/0047582 A1) in view of Kim et al. (US 2014/0145931 A1), further in view of Scott, II et al. (US 2020/0174559 A1).

Regarding Claim 11, the combination of Anderson and Kim does not explicitly teach "The information providing apparatus of claim 1, wherein the visual salience technique includes using image recognition based on a learning model." However, in an analogous field of endeavor, Scott, II teaches this limitation (Scott, II, [0072] discloses "In some embodiments of the present invention, video processing system 404 generates a training data set of expected eye gaze patterns by aggregating the eye gaze behavior of individuals over a plurality of viewing sessions. In some embodiments of the present invention, the aggregation includes annotating zones within the video that the users most often tended to look at (e.g., objects, screen coordinates, etc.). In some embodiments of the present invention, the annotations are performed for each video from or for each video sub-scene"; where aggregating eye gaze behavior is a visual salience technique, and where annotating training data of a machine learning model generated based on eye gaze behavior is a visual salience technique using image recognition based on a learning model).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Anderson and Kim to incorporate the teachings of Scott, II by incorporating a training data set of a machine learning model based on eye gaze behavior. The prior art Anderson contained a "base" method upon which the claimed invention can be seen as an "improvement." The prior art Scott, II contained a known technique that is applicable to the base method: Scott, II teaches a technique of training a machine learning model using eye gaze data to determine visually salient features. One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. That is, it would have been obvious that the eye gaze technique of Scott, II would identify areas of interest and thus may be applied to the object detector method of Anderson; Anderson discloses that the object detector may use "algorithms" to detect objects, and Scott, II teaches an algorithm based on eye gaze data and machine learning. Accordingly, the combination of Anderson, Kim, and Scott, II discloses the invention of claim 11.
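
A small sketch of the annotation step the rejection reads out of Scott, II: eye-tracker fixations aggregated over viewing sessions become labelled salient zones, and each zone mask could then serve as a beforehand label paired with its frame when training the learning model of claims 11-12. The fixation data and grid resolution are fabricated, and no particular ML framework is implied.

```python
# Aggregate hypothetical gaze fixations into a labelled salient-zone mask.
import numpy as np

H, W = 72, 128                               # label-grid resolution (assumed)
rng = np.random.default_rng(0)

# Fake eye-tracker fixations (x, y), clustered where viewers tended to look.
fixations = rng.normal(loc=(90.0, 30.0), scale=4.0, size=(500, 2))

heat, _, _ = np.histogram2d(fixations[:, 1], fixations[:, 0],
                            bins=(H, W), range=((0, H), (0, W)))
labels = (heat > 0.2 * heat.max()).astype(np.uint8)   # zones "most looked at"

# `labels` would be paired with the corresponding frame as one training
# example for an image-recognition learning model.
print(labels.sum(), "grid cells labelled salient")
```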
Regarding Claim 12, the combination of Anderson, Kim, and Scott, II discloses "The information providing apparatus of claim 11, wherein the learning model is obtained by machine learning areas using identified images, the identified images comprising a plurality of identified areas identified using an eye tracker, the plurality of identified areas corresponding to areas on which lines of sight of a subject are focused, wherein the plurality of identified areas are labelled beforehand" (Scott, II, [0074] discloses "At block 604, a gaze point of the user is monitored and/or tracked as the user views one or more frames of the video (e.g., via eye gaze tracking component 410). At block 606, the orientation of the video is changed in response to a determination that the monitored gaze point of the user is different from a predetermined target gaze point (e.g., via video reorientation component 412, machine learning component 416, feedback component 416, etc.), in which the changing of the orientation includes repositioning the target gaze point of the video to the monitored gaze point of the user"; where tracking a gaze point of a user is identifying areas using an eye tracker. Scott, II, [0072] also discloses "In some embodiments of the present invention, the aggregation includes annotating zones within the video that the users most often tended to look at (e.g., objects, screen coordinates, etc.)"; where annotating zones users look at is labelling identified areas beforehand, since annotating training data indicates the annotations are performed prior to training and implementation of the machine learning model). The proposed combination, as well as the motivation for combining the Anderson, Kim, and Scott, II references presented in the rejection of Claim 11, apply to Claim 12 and are incorporated herein by reference. Thus, the apparatus recited in Claim 12 is met by Anderson, Kim, and Scott, II.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE TABANCAY DUFFY, whose telephone number is (703) 756-1859. The examiner can normally be reached Monday through Friday, 8:00 am to 5:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROLINE TABANCAY DUFFY/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Apr 28, 2022: Application Filed
Jul 11, 2024: Non-Final Rejection — §103
Oct 15, 2024: Applicant Interview (Telephonic)
Oct 15, 2024: Examiner Interview Summary
Nov 18, 2024: Response Filed
Jan 23, 2025: Final Rejection — §103
Apr 17, 2025: Applicant Interview (Telephonic)
Apr 17, 2025: Examiner Interview Summary
Apr 28, 2025: Request for Continued Examination
May 05, 2025: Response after Non-Final Action
Jun 02, 2025: Non-Final Rejection — §103
Sep 05, 2025: Response Filed
Oct 01, 2025: Final Rejection — §103
Dec 11, 2025: Applicant Interview (Telephonic)
Dec 11, 2025: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602753: ULTRASOUND IMAGE PROCESSING APPARATUS (2y 5m to grant; granted Apr 14, 2026)

Patent 12602788: METHOD AND SYSTEM FOR FULLY AUTOMATICALLY SEGMENTING CEREBRAL CORTEX SURFACE BASED ON GRAPH NETWORK (2y 5m to grant; granted Apr 14, 2026)

Patent 12597130: IMAGE PROCESSING APPARATUS, OPERATION METHOD OF IMAGE PROCESSING APPARATUS, AND OPERATION PROGRAM OF IMAGE PROCESSING APPARATUS (2y 5m to grant; granted Apr 07, 2026)

Patent 12580081: SYSTEMS AND METHODS FOR DIRECTLY PREDICTING CANCER PATIENT SURVIVAL BASED ON HISTOPATHOLOGY IMAGES (2y 5m to grant; granted Mar 17, 2026)

Patent 12567130: REAL-TIME BLIND REGISTRATION OF DISPARATE VIDEO IMAGE STREAMS (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 80% (99% with interview, +26.9%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
