Prosecution Insights
Last updated: April 19, 2026
Application No. 17/608,382

OBJECT IDENTIFICATION SYSTEM WITH FLAT CARRIER MEDIUM FOR ARRANGING ON A DEVICE

Status: Non-Final OA (§103)
Filed: Nov 02, 2021
Examiner: DARDANO, STEFANO ANTHONY
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Audi AG
OA Round: 7 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (57 granted / 74 resolved; +15.0% vs TC avg; above average)
Interview Lift: +33.0% among resolved cases with an interview (strong)
Typical Timeline: 3y 2m average prosecution; 22 applications currently pending
Career History: 96 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 74 resolved cases.
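The headline examiner metrics are simple ratios over the career counts reported above. Here is a minimal sketch of how they could be reproduced; the formulas are assumptions (not the vendor's documented methodology), and the TC-average value of 62% is back-derived from the stated +15.0% delta.

```python
# Hedged sketch: reproducing the dashboard's headline examiner metrics
# from the underlying counts. The formulas are assumptions, not the
# vendor's documented methodology.

granted = 57    # applications allowed by this examiner
resolved = 74   # allowed + abandoned (the 22 pending cases are excluded)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")    # -> 77%

tc_avg = 0.62   # assumed TC 2600 average, back-derived from the "+15.0%" delta
print(f"vs TC avg: {allow_rate - tc_avg:+.1%}")  # -> +15.0%
```

Note that the allow rate divides by resolved cases only, which is why the 22 pending applications do not dilute the 77% figure.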

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Status

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/26 has been entered. Claim 19 has been amended; claims 1-10, 21, and 23-31 have been canceled. Claims 11-20 and 22 are pending.

Response to Arguments

Applicant's arguments with respect to claims 11-20 and 22 have been found convincing because the amended language results in new claim scope (specifically regarding camera location in the vehicle). An updated search has been made and additional art has been provided in this rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 11, 13-16, 18-20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over KLEINDIENST et al. (US 20220113551 A1, hereinafter "KLEINDIENST") in view of DALAL et al. (US 20130141574 A1, hereinafter "DALAL") in further view of Wieczorek et al.
(US 20220172489 A1, hereinafter "Wieczorek").

Regarding claim 19, KLEINDIENST discloses a motor vehicle, comprising:

a vehicle interior ([0304]: "In this way, the exterior and/or interior region of the vehicle can be illuminated in a targeted manner in order to ensure a reliable detection even under poor lighting conditions");

an object recognition system including at least one capturing device, a capturing device of the at least one capturing device ([0037]: "In the case of the functionalized waveguide for a detector system". This system contains image capture devices, [0151]: "By means of a lens 10 of the detector system 2, the light beams L1-L3 are then focused onto a detector 11 of the detector system 2, such that the desired image of the object 9 can be recorded by means of the detector 11". These light beams are incident from the surrounding area: "an object 9 can be imaged in such a way that light beams L1, L2, L3 emanating from the object 9 enter the plate 6 via the front side 7" [0150]), including:

at least two two-dimensional carrier mediums, a two-dimensional carrier medium of the at least two two-dimensional carrier mediums configured as a light guide, and on which a coupling-in region and a coupling-out region are disposed ([0037]: "In the case of the functionalized waveguide for a detector system, the input coupling region can comprise at least two volume holograms, each of which deflects only a portion of radiation coming from an object to be detected and impinging on the front side, such that the deflected portion propagates as coupled-in radiation in the base body as far as the output coupling region by means of reflection and impinges on the output coupling region. The volume holograms of the input coupling region can differ in that their deflection function comprises different spectral angular properties. As a result, different wavelengths can be deflected for the same angle of incidence. The output coupling region deflects at least one portion of the coupled-in radiation impinging on it, such that the deflected portion emerges from the base body (preferably via the front or rear side) in order to impinge on the detector system" (emphasis added). Using multiple detection systems is also described ([0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)". These detection systems contain the two-dimensional carrier mediums, so multiple systems have multiple carrier mediums),

wherein the coupling-in region is configured as a holographic element including a first deflection structure configured to couple light that is incident from a surrounding area of the two-dimensional carrier medium on the first deflection structure into the two-dimensional carrier medium (Fig. 1, [0150]: "By means of the camera 3, an object 9 can be imaged in such a way that light beams L1, L2, L3 emanating from the object 9 enter the plate 6 via the front side 7 and are deflected by the input coupling region 4 such that they impinge on the front side 7 at an angle such that total internal reflection takes place". This coupling-in region contains holographic elements that act as deflection structures, [0037]: "In the case of the functionalized waveguide for a detector system, the input coupling region can comprise at least two volume holograms each of which deflects only a portion of radiation coming from an object to be detected and impinging on the front side"),

the two-dimensional carrier medium is configured to transmit the light coupled into the two-dimensional carrier medium from the coupling-in region by internal reflection to the coupling-out region (Fig. 1, [0150]: "The light beams L1, L2 and L3 are thus guided as far as the output coupling region 5 by means of total internal reflection at the front side 7 and rear side 8". The plate acts as the two-dimensional carrier medium guiding the light from the coupling-in region to the coupling-out region by internal reflection),

and the coupling-out region is configured as a holographic element including a second deflection structure configured to couple light transmitted to the coupling-out region through the two-dimensional carrier medium that is incident on the second deflection structure, out of the two-dimensional carrier medium (Fig. 1, [0150]: "said output coupling region bringing about a deflection in a direction toward the front side 7, such that the light beams L1-L3 emerge from the plate via the front side 7. The light beams L1-L3 thus propagate in the waveguide 1 along a first direction R1 (here y-direction) from the input coupling to the output coupling region 4, 5. By means of a lens 10 of the detector system 2, the light beams L1-L3 are then focused onto a detector 11 of the detector system 2, such that the desired image of the object 9 can be recorded by means of the detector 11". This coupling-out region is also embodied as holographic elements, [0016]: "the input coupling region and the output coupling region are embodied as diffractive structures (e.g. as volume holograms)");

at least two image capturing devices from among an image sensor or a camera, which correspond to the capturing device, as corresponding image capturing devices, a plurality of capturing devices among the at least one capturing device disposed at different positions in the vehicle interior and configured to capture, from respective coupling-out regions of the at least two two-dimensional carrier mediums, light that is incident from the surrounding area and coupled in the at least two two-dimensional carrier mediums and coupled out of the at least two two-dimensional carrier mediums, to provide the light in a form of respective image data of the corresponding image capturing devices which correlates with the light coupled out of the at least two two-dimensional carrier mediums ([0151]: "By means of a lens 10 of the detector system 2, the light beams L1-L3 are then focused onto a detector 11 of the detector system 2, such that the desired image of the object 9 can be recorded by means of the detector 11". These light beams are incident from the surrounding area: "an object 9 can be imaged in such a way that light beams L1, L2, L3 emanating from the object 9 enter the plate 6 via the front side 7" [0150]. Using multiple detection systems disposed at different angles for 3D positioning is also described ([0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)"));

and a processor configured to capture, based on the respective image data of the corresponding image capturing devices, an object in the surrounding area in the vehicle interior from different viewing angles based on the different positions of the corresponding image capturing devices ([0073]: the position of the object is detected, "They can be used to detect e.g. the position of a person or of an object within the vehicle". Multiple viewing angles from the detection systems disposed in different windows are described, [0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)". Using the image data to help identify fatigue, gestures, or the individuals in the car is also described: "With the image sequences thus obtained, in combination with corresponding data processing, it is possible to implement further safety systems such as e.g. fatigue recognition or gesture control. Identification of the driver and/or occupants without a visible opening for a camera is also possible in this way" [0305]).

KLEINDIENST does not expressly disclose and ascertain a material from which the object is made and/or a temperature of the object and/or surface properties of the object, and/or capture a face of a driver of the motor vehicle in the form of the respective image data of the corresponding image capturing devices, recognize the captured face of the driver based on an object recognition criterion and/or assign the captured face of the driver to a stored profile of the driver, so that the driver is identifiable.

However, DALAL teaches and ascertain a material from which the object is made ([0014]: "Methods are provided herein for dynamically determining a threshold reflectance value which is used, in accordance with various embodiments hereof, to isolate pixels in the image which are categorized as human skin from pixels of other materials detected in the vehicle's interior and then determine the number of human occupants in the vehicle's passenger compartment". The "and/or" language means the list is disjunctive (a list consisting of ascertaining a material from which the object is made, a temperature of the object, surface properties of the object, and capturing a face of a driver), which means only one of the options need be present to meet a prima facie case of rejection under the broadest reasonable interpretation (the option cited for DALAL is the process of ascertaining a material from which the object is made)).

At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify KLEINDIENST's object detection system to include DALAL's ability to determine the material of objects in the vehicle because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results.
More specifically, DALAL's ability to determine the material of objects in the vehicle permits detection of people in the car by detection of reflectance values indicative of human skin. This known benefit in DALAL is applicable to KLEINDIENST's object detection system as they both share characteristics and capabilities; namely, they are directed to recognition of objects in vehicles. Therefore, it would have been recognized that modifying KLEINDIENST's object detection system to include DALAL's ability to determine the material of objects in the vehicle would have been obvious because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate DALAL's material-determination capability into recognition of objects in vehicles and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.

The combination of KLEINDIENST and DALAL does not expressly disclose wherein at least a first capturing device among the at least one capturing device is positioned in at least one position among positions including: a rear-view mirror in the vehicle interior, a center console in the vehicle interior, or an instrument cluster in the vehicle interior.

However, Wieczorek teaches placing a camera among multiple cameras in at least one position, the positions including a rear-view mirror in the vehicle interior ([0067]: "In an embodiment of the object detection device at least the sensor of the camera system is integrated into the rearview mirror for receiving electromagnetic radiation through a mirror element of the rearview mirror". This system can contain multiple cameras: "In accordance with the invention, a method for operating an object detection device for the interior of a motor vehicle, which in particular comprises a plurality of camera systems and/or lighting devices, is supplied, in which for the detection of at least a first input treatment of a user, an object localization and an object detection is carried out so that the object is detected by the object detection device despite distance and/or obscuration" [0049]. Due to the "or" language this list is disjunctive, which means only one of the options need be present to meet a prima facie case of rejection under the broadest reasonable interpretation (the option cited for Wieczorek is the location of the camera being in the rear-view mirror of the vehicle interior)).

At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to modify the combination of KLEINDIENST and DALAL's camera positions to include Wieczorek's camera position in the rear-view mirror in the vehicle interior because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Wieczorek's vehicle interior cameras are comparable to the combination of KLEINDIENST and DALAL's interior vehicle cameras because both are multi-camera systems in the interior of vehicles used for object detection. The combination of KLEINDIENST and DALAL is silent about a position of one of their cameras being in the rear-view mirror of the vehicle. Wieczorek teaches that a multi-camera system in a vehicle interior for object detection can use the rear-view mirror as a position for one of the cameras and result in the object being detected.
Therefore, it would have been obvious to one of ordinary skill in the art to use the camera position in the rear-view mirror for one of the cameras in the combination of KLEINDIENST and DALAL's camera positions to detect objects in the interior of vehicles, as taught by Wieczorek.

Regarding claim 11, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein the coupling-in region includes at least one optical grating as the first deflection structure ([0247]: "Other wavelengths emanating from the same object point are deflected into other angles in the waveguide 1 by the input coupling grating 20" (emphasis added)), and the coupling-out region includes at least one optical grating as the second deflection structure ([0173]: "This can be altered by implementing an additional deflection function (such as e.g. of a prism, of a tilted mirror, of a linear grating, etc.) in the output coupling region 5" (emphasis added)).

Regarding claim 13, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein the two-dimensional carrier medium is configured as a transparent plate, film, or lacquer ([0212]: "The invention provides a thin, transparent, switchable holographic waveguide").

Regarding claim 14, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein the processor is further configured to determine object data describing a recognized object ([0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)". The object data is the position of the object), and the object data includes a spatial position of the recognized object in relation to a reference point of the object recognition system and/or a three-dimensional shape of the object ([0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)").

Regarding claim 15, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein the processor is further configured to determine the object data describing the recognized object by photogrammetry (in [0029] of the specification of the subject application, "photogrammetry" is described as "with the aid of a measurement method and an associated evaluation method of remote sensing designed to determine the spatial position or three-dimensional shape of an object from imaged representations, that is to say image data." [0307] of KLEINDIENST discloses "measurement from a plurality of perspectives").
Regarding claim 16, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches further comprising: a light source configured to emit a light signal into the surrounding area ([0268]: "For this purpose, the light path in the waveguide 1 is used in the opposite direction and a static or dynamic light source"), wherein the light signal emitted by the light source is coupled into the two-dimensional carrier medium through a light signal coupling-in region, then guided through the two-dimensional carrier medium by internal reflection, and output to the surrounding area at a light signal coupling-out region ([0268]: "For this purpose, the light path in the waveguide 1 is used in the opposite direction and a static or dynamic light source (or a correspondingly luminous image source) is used instead of the detector. Consequently, the previous output coupling region becomes the input coupling region 4, and the previous input coupling region becomes the output coupling region 5, as is shown in FIGS. 35, 36 and 37"), and the processor is configured to effect, based on the light signal emitted by the light source and output to the surrounding area, an improvement of image data of the respective image data and/or the object (this limitation recites an intended effect of the claimed evaluation device, and such intended effects are not given patentable weight (MPEP 2111.04); nonetheless, KLEINDIENST's light source is capable of providing this intended effect, [0304]: "In this way, the exterior and/or interior region of the vehicle can be illuminated in a targeted manner in order to ensure a reliable detection even under poor lighting conditions". Given the use of structured light for 3D object positioning, the image processing apparatus is capable of improving the image data and the object recognition with light).

Regarding claim 18, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein the coupling-in region and the coupling-out region are formed as one piece with the two-dimensional carrier medium (see citation below), or the two-dimensional carrier medium is formed as a separate element from the coupling-in region and the coupling-out region (Figs. 1-3, [0148]: "For this purpose, the waveguide 1 comprises an input coupling region 4 and an output coupling region 5 spaced apart therefrom". Figs. 1-3 show all these aspects in one piece. Since "or" is used in the claim language, only one of the limitations must be met).

Regarding claim 20, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein at least a second capturing device among the at least one capturing device is positioned in at least one position among positions including: in a screen of a display apparatus in the vehicle interior, in a dashboard in the vehicle interior, in a windshield of the motor vehicle, in a side window of the motor vehicle, in a roof window of the motor vehicle, or between two A-pillars of the motor vehicle ([0307]: "It is also possible to integrate a plurality of detection systems in different windows of a vehicle. In this way, it is possible to determine the position as in the case of a three-dimensional coordinate system of persons and objects in space (keyword: tomography and thus measurement from a plurality of perspectives)". Figs. 46-47 show different window implementations, including the side window and windshield).
Regarding claim 22, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 19; in addition, KLEINDIENST further teaches wherein an image capturing device of the corresponding image capturing devices is oriented toward a volume of the vehicle interior and/or toward a surrounding area of the motor vehicle ([0304]: "In this way, the exterior and/or interior region of the vehicle can be illuminated in a targeted manner in order to ensure a reliable detection even under poor lighting conditions". This detection is handled by a camera which is oriented toward the inside of the vehicle to detect the object ([0150]: "By means of the camera 3, an object 9 can be imaged in such a way that light beams L1, L2, L3 emanating from the object 9 enter the plate 6 via the front side 7 and are deflected by the input coupling region 4 such that they impinge on the front side 7 at an angle such that total internal reflection takes place")).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over KLEINDIENST et al. (US 20220113551 A1, hereinafter "KLEINDIENST") in view of DALAL et al. (US 20130141574 A1, hereinafter "DALAL") in further view of Wieczorek et al. (US 20220172489 A1, hereinafter "Wieczorek") in further view of Erler (US 20190187465 A1, hereinafter "Erler").

Regarding claim 12, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 11; in addition, KLEINDIENST further teaches wherein the at least one optical grating of the coupling-in region ([0247]: "Other wavelengths emanating from the same object point are deflected into other angles in the waveguide 1 by the input coupling grating 20" (emphasis added)) and the at least one optical grating of the coupling-out region ([0173]: "This can be altered by implementing an additional deflection function (such as e.g. of a prism, of a tilted mirror, of a linear grating, etc.) in the output coupling region 5").

The combination of KLEINDIENST, DALAL, and Wieczorek does not expressly disclose a surface holographic grating or a volume holographic grating for both the coupling-in and coupling-out regions. However, Erler teaches a first surface holographic grating or a first volume holographic grating for both the coupling-in and coupling-out regions ([0015]: "Preferably, the first diffractive structure comprises a first volume holographic grating, and the second diffractive structure comprises a second volume holographic grating". These gratings are also for a waveguide: "The waveguide 10 comprises in particular a volume holographic input coupling grating 10, a volume holographic expander grating 12 for beam expansion, and a volume holographic output coupling grating 13" [0059]).

At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to substitute the combination of KLEINDIENST, DALAL, and Wieczorek's waveguide gratings with Erler's waveguide gratings because such a modification is the result of the simple substitution of one known element for another producing a predictable result. More specifically, KLEINDIENST, DALAL, and Wieczorek's waveguide gratings and Erler's waveguide gratings perform the same general and predictable function, the predictable function being a deflective structure for a waveguide. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but on the very combination itself, that is, on the substitution of the combination of KLEINDIENST, DALAL, and Wieczorek's waveguide gratings with Erler's waveguide gratings. Replacing the combination's gratings with Erler's waveguide gratings would result in a first and a second volume holographic grating, since the combination has a first and a second grating.
Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over KLEINDIENST et al. (US 20220113551 A1, hereinafter "KLEINDIENST") in view of DALAL et al. (US 20130141574 A1, hereinafter "DALAL") in further view of Wieczorek et al. (US 20220172489 A1, hereinafter "Wieczorek") in further view of TAMURA (US 20210084212 A1, hereinafter "TAMURA").

Regarding claim 17, the combination of KLEINDIENST, DALAL, and Wieczorek teaches the motor vehicle as claimed in claim 16; in addition, KLEINDIENST further teaches wherein the light signal ([0276]: "LEDs, lasers, etc. can be used as light source 32"). KLEINDIENST is assumed to use an infrared light source since the waveguide is configured to be able to receive infrared light ([0013]: "the transparent base body can be transparent to radiation or light from the visible wavelength range. Furthermore, a transparency to the near infrared and/or the infrared range can be present") and the detector system also uses infrared light ([0303]: "Furthermore, there is also the possibility of coupling in radiation outside the visual spectral range, for example radiation from the near infrared. With the use of a correspondingly suitable detector system, it is thus possible to acquire image information under illumination conditions that are poor for human beings"); still, KLEINDIENST does not expressly disclose an infrared light source. However, TAMURA teaches an infrared light source ([0018]: "an imaging unit for imaging an inside of the vehicle irradiated by the infrared light distributed by the first light distribution member").

At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to substitute the combination of KLEINDIENST, DALAL, and Wieczorek's light source with TAMURA's infrared light source because such a modification is the result of the simple substitution of one known element for another producing a predictable result. More specifically, KLEINDIENST, DALAL, and Wieczorek's light source and TAMURA's infrared light source perform the same general and predictable function, the predictable function being illuminating the inside of a vehicle for object detection. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but on the very combination itself, that is, on the substitution of the combination of KLEINDIENST, DALAL, and Wieczorek's light source with TAMURA's infrared light source. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Nemat-Nasser et al. (US 20130073114 A1) teaches capturing face details of a driver with a trained neural network and matching them to stored profiles.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFANO A DARDANO, whose telephone number is (703) 756-4543. The examiner can normally be reached Monday - Friday, 11:00 - 7:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Greg Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEFANO ANTHONY DARDANO/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Nov 02, 2021 — Application Filed
Feb 20, 2024 — Non-Final Rejection — §103
May 17, 2024 — Examiner Interview Summary
May 17, 2024 — Applicant Interview (Telephonic)
May 21, 2024 — Response Filed
Jun 27, 2024 — Final Rejection — §103
Sep 09, 2024 — Response after Non-Final Action
Sep 18, 2024 — Response after Non-Final Action
Oct 02, 2024 — Request for Continued Examination
Oct 08, 2024 — Response after Non-Final Action
Nov 04, 2024 — Non-Final Rejection — §103
Jan 22, 2025 — Response Filed
Mar 24, 2025 — Final Rejection — §103
May 28, 2025 — Response after Non-Final Action
Jul 01, 2025 — Request for Continued Examination
Jul 02, 2025 — Response after Non-Final Action
Aug 04, 2025 — Non-Final Rejection — §103
Sep 22, 2025 — Response Filed
Oct 29, 2025 — Final Rejection — §103
Jan 06, 2026 — Response after Non-Final Action
Feb 06, 2026 — Request for Continued Examination
Feb 25, 2026 — Response after Non-Final Action
Mar 09, 2026 — Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586207 — SYSTEM AND METHOD FOR AI SEGMENTATION-BASED REGISTRATION FOR MULTI-FRAME PROCESSING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573227 — METHOD AND SYSTEM FOR EXTRACTION OF DATA FROM DOCUMENTS FOR ROBOTIC PROCESS AUTOMATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573030 — PROCESSING OF TRACTOGRAPHY RESULTS USING AN AUTOENCODER (granted Mar 10, 2026; 2y 5m to grant)
Patent 12548353 — IMAGE PROCESSING APPARATUS SUPPORTING OBSERVATION OF OBJECT USING MICROSCOPE, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536689 — MINING UNLABELED IMAGES WITH VISION AND LANGUAGE MODELS FOR IMPROVING OBJECT DETECTION (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 77%
With Interview: 99% (+33.0%)
Median Time to Grant: 3y 2m
PTA Risk: High

Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
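The projection arithmetic can be reproduced from the figures above. One plausible reading of the interview figure (an assumption; the page does not state its formula) is that the +33.0% lift is added to the base grant probability in percentage points and capped just below certainty:

```python
# Hedged sketch of the "With Interview" projection. The additive-lift
# formula and the 99% cap are assumptions inferred from the displayed
# numbers, not a documented methodology.

base_probability = 0.77   # career allow rate, used as grant probability
interview_lift = 0.33     # observed lift among resolved cases with interview

with_interview = min(base_probability + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")   # -> 99%
```

An uncapped additive lift would exceed 100%, so some cap (or a multiplicative odds-style adjustment) must be in play; the 99% ceiling shown here simply matches the displayed value.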
