Prosecution Insights
Last updated: April 19, 2026
Application No. 18/703,323

AUTONOMOUS DRIVING DEVICE FOR PROVIDING VISUALIZED VIRTUAL GUIDE INFORMATION, AND OPERATING METHOD THEREFOR

Status: Final Rejection — §103
Filed: Apr 19, 2024
Examiner: DEL VALLE, LUIS GERARDO
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Integrit Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 72% (111 granted / 154 resolved; +20.1% vs TC avg — above average)
Interview Lift: +23.8% across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 30 applications currently pending
Career History: 184 total applications across all art units
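The headline allow rate above is a simple ratio of the two counts on this page; a minimal sketch of that arithmetic (illustrative only, not the product's documented method):

```python
# Career allow rate from the raw counts shown above (illustrative only).
granted = 111    # applications granted by this examiner
resolved = 154   # total resolved cases (grants + abandonments)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 72.1%
```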

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 154 resolved cases
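The "vs TC avg" deltas in this table follow from subtracting a single Tech Center average from each per-statute rate; a sketch of that calculation (the 40.0% TC average is inferred from the listed figures, not stated on the page — an assumption):

```python
# Per-statute rates for this examiner, taken from the table above.
examiner_rate = {"101": 13.1, "103": 60.5, "102": 11.2, "112": 12.7}
tc_avg = 40.0  # Tech Center average estimate (inferred -- an assumption)

# Delta vs the Tech Center average, matching the listed figures.
deltas = {s: round(rate - tc_avg, 1) for s, rate in examiner_rate.items()}
print(deltas)  # -> {'101': -26.9, '103': 20.5, '102': -28.8, '112': -27.3}
```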

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Examiner's Response re: 112(b): Applicant's arguments, see Pages 7-8, filed 16 Dec 2025, with respect to Claims 4 and 6 have been fully considered and are persuasive. The 112(b) rejection of Claims 4 and 6 has been withdrawn.

Examiner's Response re: 101 Rejection: Applicant's arguments, see Pages 9-12, filed 16 Dec 2025, with respect to Claims 1-6 have been fully considered and are persuasive. The 101 rejection of Claims 1-6 has been withdrawn.

Examiner's Response re: 103 Rejection: Applicant's arguments, see Pages 12-14, filed 16 Dec 2025, with respect to the rejection(s) of claim(s) 1-6 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Shin, Fuji, Tokuchi, Grant, and further in view of Saboune.

Claim Objections

Claim 1 is objected to because of the following informalities: the claim states in the first amendment "…physical information of the user", and this appears to be a typo, as the limitation ought to read "…physical information of a user" per the Applicant's claimed invention. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Shin et al., US 20200005787 A1 (herein, Shin), in view of Fujinami et al., US 20240034325 A1 (herein, Fuji), Reger, US 7860614 B1 (herein, Reger), Tokuchi, US 20180104816 A1 (herein, Tokuchi), Grant et al., US 20210211575 A1 (herein, Grant), and in further view of Saboune et al., US 20140267904 A1 (herein, Saboune).

Regarding Claim 1, Shin discloses an autonomous driving device (FIG. 2 illustrates the autonomous driving device) for providing visualized virtual guide information (¶[0057] – "…The user can see the display unit 151 installed on the rear of the guide robot 100 while following the guide robot 100."), the autonomous driving device comprising: an input unit (FIG. 2, #120 – input unit) configured to recognize and receive a voice and motion of an object (FIG. 9A-B and [0167] – "…, FIG. 9A illustrates a situation where an 'old man/woman' said 'Hello', and FIG. 9B illustrates a situation where a 'child' said 'Hello'."); a voice analysis unit ([0047] – "a voice recognition unit") configured to analyze a voice feature value of the object ([0168] – "…identifies the user who has said the greeting, and classifies the user's features."); an image analysis unit configured to analyze a motion feature value of the object ([0022] – "…the approaching user by activating a camera when the approaching user is detected by the sensor, and extract and classify the user features by analyzing the acquired facial image…."); a determination unit (FIG. 2, #160 – learning data unit) configured to define a meaning of the feature value of the voice and the feature value of the motion ([0092] – "…perform only a function of learning everyday words…") and then determine feedback information corresponding to the defined meaning ([0093] – "…generate result information in response to input voice (or speech) information, based on data stored in the learning data unit 160…"); wherein the image analysis unit is configured to, after extracting a user body image based on body feature points of the user included in an object image, analyze a motion feature value of the object by comparing shape information of the extracted user body image with learned shape information (¶[0022] – "In one embodiment, the control unit may acquire a facial image of the approaching user by activating a camera when the approaching user is detected by the sensor, and extract and classify the user features by analyzing the acquired facial image.").

Shin does not disclose including a three-dimensional camera and an inertial measurement sensor (IMU) configured to acquire depth information and physical motion information of the user (see the Claim Objection above).
However, Fuji teaches including a three-dimensional camera and an inertial measurement sensor (IMU) configured to acquire depth information and physical motion information of the user (¶[0226] – "The recognition unit 111 uses image data, depth image data, IMU data, GNSS data, or the like input from the sensor unit 101 or the like as an input to perform fatigue recognition processing for recognizing the degree of the operator's fatigue."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by Shin to include a 3D camera and IMU as taught by Fuji. Doing so provides the autonomous driving device with an additional capability of providing information to the user while operating the vehicle.

Modified Shin discloses visual and feedback information (¶[0057] – "…a function of outputting visual information (e.g., route (path) guidance information, query information) related to a currently-provided service…") but does not disclose a virtualization unit configured to generate or receive visual information based on augmented reality or virtual reality and output guide information for the feedback information.

However, Tokuchi teaches a virtualization unit (FIG. 2, #28 – display) configured to generate or receive visual information based on augmented reality or virtual reality and output guide information for the feedback information ([0054] – "…display 28, information concerning problems, information concerning solutions, various messages, and so on, are displayed."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by modified Shin to include a virtualization unit as taught by Tokuchi.
Doing so provides the autonomous driving device with an additional capability that enhances the use of the device, thus making it more valuable to the user.

Modified Shin discloses the visualization unit but does not disclose the virtualization unit being further configured to control a physical driving operation or a movement path of the autonomous driving device based on the feedback information.

However, Reger teaches the virtualization unit being further configured to control a physical driving operation or a movement path of the autonomous driving device based on the feedback information (Col. 1, lines 34-45 – "One aspect of the invention is a trainer for training a human to use a physical robot in a physical environment, the physical robot being controlled in the physical environment by an operator control unit, the trainer comprising an input device; a visual display; a computer connected to the input device and the visual display; and computer software disposed in the computer for creating a virtual robot and a virtual environment on the visual display, the virtual robot and the virtual environment being simulations of the physical robot and the physical environment wherein interaction between the virtual robot and the virtual environment simulates interaction between the physical robot and the physical environment."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by modified Shin to include a physical driving operation as taught by Reger. Doing so provides the autonomous driving device with an additional capability that enhances the use of the device by providing physical driving operation, thus making it more valuable to the user.
Modified Shin does not disclose wherein the image analysis unit is further configured to analyze the motion feature value by normalizing and clustering or classifying the user body image and comparing the normalized and clustered or classified user body image with a plurality of learned object-shape information using a deep neural network algorithm.

However, Grant teaches wherein the image analysis unit is further configured to analyze the motion feature value by normalizing and clustering or classifying the user body image and comparing the normalized and clustered or classified user body image with a plurality of learned object-shape information using a deep neural network algorithm (FIG. 4 and ¶[0059] – "…in the context of step 410 the image analysis application determines contextual details associated with the archived image capture data and/or the collected image profile data by applying at least one NLP machine learning algorithm to inputs derived from parsed textual metadata. Optionally, the image analysis application processes results from application of the at least one NLP algorithm via NLU in order to enhance machine reading comprehension. Additionally or alternatively, the image analysis application classifies image capture data based upon textual analysis by applying NLP, e.g., LDA, to image descriptions (captions) and/or image textual metadata. In another embodiment, the image analysis application applies the at least one machine learning algorithm at step 410 in conjunction with a recurrent neural network architecture configured to store time series pattern data associated with the parsed textual metadata derived from the archived image capture data…"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by modified Shin to classify the user body image along with its corresponding actions as taught by Grant.
Doing so provides the autonomous driving device with an additional analytical classification and the corresponding deep neural network algorithm, and makes it available to the user. This information can be utilized to improve the safety of the vehicle by only allowing authorized personnel to enter and operate it.

Modified Shin does not disclose wherein the determination unit is configured to analyze a motion pattern of the user based on angular velocity information, acceleration information, or altitude information corresponding to the motion of the object, and to define a situation or posture of the user according to the analyzed motion pattern.

However, Saboune teaches wherein the determination unit is configured to analyze a motion pattern of the user based on angular velocity information, acceleration information, or altitude information corresponding to the motion of the object, and to define a situation or posture of the user according to the analyzed motion pattern (¶[0042] – "In some implementations, control signal generator module 162 may be configured to generate one or more control signals that cause haptic feedback. Control signal generator module 162 may generate control signals responsive to the foregoing events detected by or outputs from image processing module 150, sensor processing module 152, image-based motion estimator module 154, sensor-based motion estimator module 156, data fusion module 158, and/or configuration management module 160. In some implementations, control signal generator 162 may generate control signals based on the particular configuration (e.g., number, type, placement, etc.) of one or more haptic output devices 143 that generate haptic feedback responsive to the control signals. In this manner, computing device 140 facilitates various types of haptic feedback responsive to different types of events, characteristics of events (e.g., estimated speed of an object appearing in a given scene), and/or user preferences/configurations."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by modified Shin to analyze the motion based on acceleration according to the user as taught by Saboune. Doing so provides the autonomous driving device with the capability to analyze motion patterns based on the acceleration according to the user. This in turn provides the user with better information so as to make decisions based on the current situation.

Regarding Claim 2, modified Shin further discloses further comprising an autonomous driving unit (FIG. 1, #100 – guide robot) configured to travel in a preset zone (¶[0056] – "…preset path…") or place on the basis of autonomous driving information set by an external terminal (¶[0061] – "…external terminal…").

Regarding Claim 3, modified Shin further discloses wherein the autonomous driving unit moves to a location of the object ([0045] – "Also, 'guide robot' disclosed herein may refer to a robot capable of performing autonomous travel in order to guide a user to a road, a specific place, and the like,") when a specific sound or specific motion of the object is detected (¶[0046] – "In addition, 'guide robot' disclosed herein can perform interaction and movement through continuous conversations, as well as using screen, voice, and LED, in order to provide various information and guidance to a user.").
Regarding Claim 5, modified Shin further discloses wherein the guide information for the feedback information is received from an external server using the visual information based on augmented reality (¶[0061] – "…an external server, for example, an artificial intelligence (AI) server or an external terminal…") or virtual reality and updated.

Regarding Claim 6, please see the rejections for Claims 1 and 5, as the limitations correspond to what is claimed in Claim 6.

Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Shin et al., US 20200005787 A1 (herein, Shin), in view of Fuji, Reger, Tokuchi, Grant, Saboune, and in further view of Gillett, US 20170190335 A1 (herein, Gillett).

Regarding Claim 4, modified Shin further discloses further comprising a sensor unit (¶[0061] – "at least one module…") including a location sensor (FIG. 2, #110 – communication unit) for transmitting or receiving one or more of a Wi-Fi signal (¶[0062] – "…communication unit 110 may perform communications with an artificial intelligence (AI) server and the like by using wireless Internet communication technologies, such as Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro),…"), a wireless display (WiDi) signal, an ultra-wideband (UWB) signal, a Bluetooth signal, an infrared signal, an ultraviolet signal, an ultrasonic signal, a Global Positioning System (GPS) signal, and a wireless signal which performs a similar function thereto, but does not disclose a motion sensor for measuring inertia, including angular velocity and acceleration, and for measuring altitude or other movement of the object, wherein the motion sensor is configured to directly measure the altitude, measure atmospheric pressure, or measure movement of the object using an angle, a velocity, the acceleration, or the angular velocity.
However, Gillett teaches a motion sensor for measuring inertia, including angular velocity and an acceleration, altitude, and the like according to movement of the object, or measuring other object movement ([0105] – "…sensors units 703 which may also include: a IMU, altitude sensor, gravity sensor, as disclosed, the aforementioned elements and those aforementioned sensor unit's employing sensor signals 725."), wherein the motion sensor is configured to directly measure the altitude, measure atmospheric pressure, or measure movement of the object using an angle, a velocity, the acceleration, or the angular velocity (Claim 18 – "…an accelerometer sensor, IMU, altitude sensor, gravity sensor, or the like, a steering column configured with a prewired stem containing USB power cable…"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the autonomous driving device as disclosed by modified Shin to include sensors measuring inertia, altitude, acceleration, and velocity by the accelerometer as taught by Gillett. Doing so provides the autonomous driving device with an additional capability that enhances the use of the device, thus making it more valuable to the user due to the increase in the variables measured by the sensors.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS G DEL VALLE whose telephone number is (303) 297-4313. The examiner can normally be reached Monday-Friday, 0730-1630 MST.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne Antonucci, can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUIS G DEL VALLE/
Examiner, Art Unit 3666

/ANNE MARIE ANTONUCCI/
Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

Apr 19, 2024 — Application Filed
Sep 09, 2025 — Non-Final Rejection (§103)
Dec 08, 2025 — Interview Requested
Dec 15, 2025 — Examiner Interview Summary
Dec 15, 2025 — Applicant Interview (Telephonic)
Dec 16, 2025 — Response Filed
Feb 20, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597040 — SHARED CHECKLISTS FOR ONBOARD ASSISTANT — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596010 — DISPLAY DEVICE, DISPLAY METHOD, AND STORAGE MEDIUM — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592151 — SYSTEM AND METHOD FOR MULTI-IMAGE-BASED VESSEL PROXIMITY SITUATION RECOGNITION SUPPORT — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12570325 — VEHICLE MOVING METHOD AND VEHICLE — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12546615 — SYSTEMS AND METHODS FOR PREDICTING FUEL CONSUMPTION EFFICIENCY — Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 96% (+23.8%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 154 resolved cases by this examiner. Grant probability derived from career allow rate.
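The "With Interview" projection is consistent with simply adding the observed interview lift to the base grant probability; a minimal sketch of that additive model (an assumption — the page does not state its formula):

```python
# Assumed additive model: base grant probability plus interview lift,
# capped at 100%. Figures taken from this page.
base_probability = 0.72   # career allow rate
interview_lift = 0.238    # observed lift with interview

with_interview = min(base_probability + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")  # -> 96%
```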
