Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
An interview was conducted on 03/23/2026 with Aaron Cunningham (summary attached). It was suggested that the language and concepts of claim 25 be incorporated into the independent claims, which would bring the claims to an allowable condition. The examiner contacted the attorney on 03/26/2026; the attorney stated that he would have the revisions by Tuesday, 03/31/2026, but was having some issues with the client. The examiner phoned the attorney again this week and left messages, but received no reply. This final action is being mailed due to deadlines within the examiner's docket.
Status of Claims
Claims 2, 3, 10, 19, and 24-26 have been amended.
Claims 2-13, 15, 16, 18, 19, 21, and 24-26 are pending.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Applicant’s arguments with respect to claims 2-13, 15, 16, 18, 19, 21, and 24-26 have been considered but are moot in view of the new ground(s) of rejection as necessitated by applicant's amendments.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed
invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-7, 9-13, 15, 18, 19, 21, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Arndt et al. [US 2016/0012301; hereinafter Arndt] in view of Ogale et al. [US 2019/0034794; hereinafter Ogale].
With respect to independent claims 2, 10, and 19, Arndt teaches a method, system, and processing unit comprising: receiving sensor data obtained using one or more sensors of a machine, the sensor data representative of a pedestrian located outside of the machine (see at least Arndt, Para 30, receiving via a camera (sensor) data representative of a pedestrian; see also "pedestrian detection," Para 49-57);
determining, based at least on the sensor data, a gesture being made by the pedestrian (see at least Arndt, Para 52, using a neural network to recognize gestures; see also Para 33-37, 45-46);
applying, to one or more neural networks, data representative of the gesture (see at least Arndt, Para 37). The classification and image analysis modules 5 and 6 perform analysis of the pedestrian and, based on the recognized gesture, cause the car in some cases to perform an action (Para 37), for example, highlighting the person in the image presented in the car, or recognizing that the person wishes to be picked up and directing the driver assist to change a lane or brake (Para 37-38). Moreover, a traffic guard at a school stop waving a traffic paddle is a gesture that causes the car to perform an action (e.g., stop) (Para 39). See also the man waving his arm while holding a sign and standing in the road, which conveys a safety message (Para 69-82). As stated in Arndt, the system can perform the action of notification, warning (audible or visual), or driver assistance, which encompasses a number of actions for the vehicle to perform, and a classifier, e.g., in the form of a neural network, could improve the decision logic for uncertain or inconclusive situations (Para 52).
Arndt does not specifically disclose, but Ogale teaches, determining, using one or more neural networks and based at least on the data representative of the gesture, a trajectory for the machine that would avoid a collision with one or more objects including at least the pedestrian; and causing the machine to navigate according to the trajectory.
Specifically, Ogale teaches that sensor data is sent through a neural network to determine if a collision will happen along the path of a vehicle (see at least Ogale, Para 9). Ogale is analogous art to Arndt, as it is directed to autonomous vehicles and the processing of sensor data (Para 9). Ogale teaches a trajectory-planning neural network that uses a first neural network and a second neural network, where the second neural network uses environmental data of the vehicle (e.g., camera data in the vicinity of the vehicle and traffic artifacts about the vehicle; Para 17). The output of the first and second neural networks can cause the vehicle to take or perform an action, such as braking, accelerating, or steering (Para 21-22). As stated in Ogale, the second neural network can process object data of objects in the vicinity of the vehicle and camera data of an optical image in the vicinity of the vehicle, plan a trajectory (Para 28), and identify a driving scenario such as a collision (Para 37-38). The planned trajectory can be improved to select waypoints to navigate that improve the safety and comfort of the passenger (Para 49). Ogale teaches that the sensor data in the second neural network can include data of pedestrians and indicate sections to avoid for collision-risk purposes (Para 71; see also Fig. 3). Ogale further teaches that the planning system can override a trajectory if a collision is imminent (Para 79).
Accordingly, it would have been obvious to the skilled artisan, prior to the effective filing date of the claimed invention and having the teachings of Arndt and Ogale in front of them, to modify the sensors and neural network of Arndt, with a reasonable expectation of success, to provide a second neural network that processes trajectory information to avoid collisions with pedestrians. The motivation to combine Arndt with Ogale comes from Ogale, which suggests using the second neural network to process environmental data that includes pedestrian information to generate trajectory information that allows the vehicle to cross an intersection in a safe, legal, and comfortable manner and to avoid trajectories that indicate a collision will occur (Para 71-72, 79).
With respect to dependent claims 3 and 11, Arndt teaches the method and system further comprising: determining, based at least on the gesture, an intent associated with the pedestrian, wherein the determining of the trajectory is further based at least on the intent (see at least Arndt, Para 7; intention 51; Para 33-37, 45-46; see, e.g., stopping the vehicle based on a policeman's gesture or audible gestures, a hitchhiker's finger, or a crossing guard causing the vehicle to slow or stop, or any of the gestures in Figures 4a-7 and Table 1, Para 68).
With respect to dependent claims 4 and 12, Arndt teaches the method and system wherein the gesture is associated with one or more of: causing the machine to continue navigating; causing the machine to stop; or causing the machine to navigate to a position associated with the pedestrian (see at least Arndt, Para 7; intention 51; Para 33-37, 45-46; see, e.g., stopping the vehicle based on a policeman's gesture or audible gestures, a hitchhiker's finger, or a crossing guard causing the vehicle to slow or stop, or any of the gestures in Figures 4a-7 and Table 1, Para 68).
With respect to dependent claims 5 and 13, Arndt teaches the method and system further comprising: determining, based at least on the sensor data, that the pedestrian includes personnel affiliated with one or more of law enforcement, fire protection, emergency services, or a crossing guard, wherein the causing the machine to navigate according to the trajectory is further based at least on the pedestrian including the personnel corresponding to the one or more of law enforcement, fire protection, emergency services, or a crossing guard (see at least Arndt, Abstract, determining that the pedestrian is a policeman, crossing guard, or cyclist; see also Para 3-4, 10, 47-51, 60).
With respect to dependent claim 6, Arndt teaches the method further comprising: determining, based at least on the sensor data, that the pedestrian is associated with a vehicle detected in an environment corresponding to the machine and represented at least partially in the sensor data, wherein the causing the machine to navigate according to the trajectory is further based at least on the pedestrian being associated with the vehicle (see at least Arndt, Para 10, customers associated with the taxi; see also Para 7; intention 51; Para 33-37, 45-46; see, e.g., stopping the vehicle based on a policeman's gesture or audible gestures, a hitchhiker's finger, or a crossing guard causing the vehicle to slow or stop, or any of the gestures in Figures 4a-7 and Table 1, Para 68).
With respect to dependent claims 7 and 15, Arndt teaches the method and system further comprising causing, using one or more output devices associated with the machine, an alert associated with the gesture being made by the pedestrian (see at least Arndt, Para 12, alert; Para 83).
With respect to dependent claim 9, Arndt teaches the method wherein the gesture is associated with at least one of: a motion of a portion of the pedestrian; or a motion of an item that is in the possession of the pedestrian (see at least Arndt, Para 7; intention 51; Para 33-37, 45-46; see, e.g., stopping the vehicle based on a policeman's gesture or audible gestures, a hitchhiker's finger, or a crossing guard causing the vehicle to slow or stop, or any of the gestures in Figures 4a-7 and Table 1, Para 68).
With respect to dependent claims 18 and 21, Arndt teaches the method and system wherein the system and processing unit are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing deep learning operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (see at least Arndt, semi-autonomous system using driver assistance, Para 38-39).
With respect to dependent claim 25, Arndt does not specifically disclose wherein: the action is associated with a trajectory for the machine to navigate; and the machine is caused to navigate along the trajectory. Arndt does, however, disclose navigating a path (see at least Arndt, Para 7; intention 51; Para 33-37, 45-46; see, e.g., stopping the vehicle based on a policeman's gesture or audible gestures, a hitchhiker's finger, or a crossing guard causing the vehicle to slow or stop, or any of the gestures in Figures 4a-7 and Table 1, Para 68).
However, Ogale more specifically teaches that the action is associated with a trajectory for the machine to navigate (see at least Ogale, Abstract, Para 1).
Claims 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Arndt in view of Ogale as applied to claims 2 and 10 above, and further in view of Sinha et al. [US 2017/0168586; hereinafter Sinha].
With respect to dependent claim 24, as indicated in the above discussion, Arndt in view of Ogale teaches each element of claim 2.
However, Arndt in view of Ogale does not disclose that determining the gesture made by the pedestrian comprises: generating, based at least on the sensor data, data representative of one or more three-dimensional (3D) poses associated with the pedestrian; and determining, based at least on the one or more 3D poses, the gesture made by the pedestrian.
Nonetheless, Sinha is analogous art to Arndt and the present application, as Sinha is directed to the problem-solving area of hand poses analyzed by a neural network (see at least Sinha, Para 7-8). Sinha teaches determining the 3D processing of the hand pose (Para 4-5) so as to determine intent (Para 24). Sinha teaches using three-dimensional imaging devices and processing the pose of the gestures from different angles (Para 29-30, 37). Sinha teaches recognizing the gesture by comparing it to a set of hand-pose parameters from which a best fit is determined (Para 33). Sinha teaches using a combination of neural network outputs for the fingers and wrist, and then combining the two to provide a gesture input into the system (Para 35, 43). Sinha teaches using a plurality of networks corresponding to each finger on the hand (Para 43); each finger can represent a different activation of a feature (Para 44-45). Sinha teaches processing the gesture in three-dimensional software using various poses from various angles corresponding to a pose of the hand (Para 50).
Accordingly, it would have been obvious to the skilled artisan, prior to the effective filing date of the claimed invention and having the teachings of Arndt, Sinha, and Ogale in front of them, to modify the sensors and neural network of Arndt, with a reasonable expectation of success, to provide a second neural network that processes trajectory information to avoid collisions with pedestrians and a neural network that processes 3D gestures. The motivation to combine Arndt with Ogale comes from Ogale, which suggests using the second neural network to process environmental data that includes pedestrian information to generate trajectory information that allows the vehicle to cross an intersection in a safe, legal, and comfortable manner and to avoid trajectories that indicate a collision will occur (Para 71-72, 79). The motivation to combine Arndt with Sinha comes from Sinha, which suggests capturing three-dimensional input of a user's hand or gesture without the use of a glove (Para 4), thereby improving the speed and recognition of hand poses in three-dimensional spaces (Para 63).
With respect to dependent claim 26, as indicated in the above discussion, Arndt in view of Ogale teaches each element of claim 10.
However, Arndt in view of Ogale does not disclose wherein: the action is associated with one or more directions being provided by the pedestrian; and the machine is caused to navigate based at least on the one or more directions. Nonetheless, Sinha is analogous art to Arndt and the present application, as Sinha is directed to the problem-solving area of hand poses analyzed by a neural network (see at least Sinha, Para 7-8). Sinha teaches determining the 3D processing of the hand pose (Para 4-5) so as to determine intent (Para 24). Sinha teaches using three-dimensional imaging devices and processing the pose of the gestures from different angles (Para 29-30, 37). Sinha teaches recognizing the gesture by comparing it to a set of hand-pose parameters from which a best fit is determined (Para 33). Sinha teaches using a combination of neural network outputs for the fingers and wrist, and then combining the two to provide a gesture input into the system (Para 35, 43). Sinha teaches using a plurality of networks corresponding to each finger on the hand (Para 43); each finger can represent a different activation of a feature (Para 44-45). Sinha teaches processing the gesture in three-dimensional software using various poses from various angles corresponding to a pose of the hand (Para 50).
Accordingly, it would have been obvious to the skilled artisan, prior to the effective filing date of the claimed invention and having the teachings of Arndt, Sinha, and Ogale in front of them, to modify the sensors and neural network of Arndt, with a reasonable expectation of success, to provide a second neural network that processes trajectory information to avoid collisions with pedestrians and a neural network that processes 3D gestures. The motivation to combine Arndt with Ogale comes from Ogale, which suggests using the second neural network to process environmental data that includes pedestrian information to generate trajectory information that allows the vehicle to cross an intersection in a safe, legal, and comfortable manner and to avoid trajectories that indicate a collision will occur (Para 71-72, 79). The motivation to combine Arndt with Sinha comes from Sinha, which suggests capturing three-dimensional input of a user's hand or gesture without the use of a glove (Para 4), thereby improving the speed and recognition of hand poses in three-dimensional spaces (Para 63).
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Arndt in view of Ogale, and further in view of Trinh et al. [US 2017/0334348; hereinafter Trinh].
With respect to dependent claims 8 and 16, as indicated in the above discussion, Arndt in view of Ogale teaches each element of claims 2 and 10. Arndt teaches tracking head movement or orientation and the viewing direction of a person (see at least Arndt, Abstract, Para 3, 50). However, Arndt does not specifically disclose the method further comprising: determining, based at least on second sensor data obtained using one or more interior sensors of the machine, a gaze direction associated with a driver of the machine; and determining, based at least on the gaze direction, that the pedestrian is located outside of a field-of-view (FOV) of the driver, wherein the one or more operations of the machine are further performed based at least on the pedestrian being located outside of the FOV of the driver.
However, Trinh is analogous art, as it is directed to the same problem-solving area of detecting a pedestrian outside of a vehicle. Trinh captures the gaze direction of the driver and determines whether the driver is gazing at pedestrian 8 or pedestrian 9; in this case, person 8 is in the field of view (see at least Trinh, Para 23). Trinh teaches that the system can either flash a light or project a crosswalk to the pedestrian (Para 24-26). The combination of Arndt's head-tracking system with Trinh's gaze tracking of the driver would result in both the pedestrian and the driver being tracked for gestures to be interpreted by the vehicle.
[Trinh, Fig. 1, reproduced here in greyscale, illustrating pedestrians 8 and 9 in the vicinity of the vehicle]
Accordingly, it would have been obvious to the skilled artisan, prior to the effective filing date of the claimed invention and having the teachings of Arndt, Ogale, and Trinh in front of them, to modify the sensors of Arndt, with a reasonable expectation of success, with the gaze sensor of Trinh. The motivation to combine Arndt with Trinh comes from Trinh, which suggests tracking a gaze direction of a driver to cause a command to be recognized by the vehicle, letting the pedestrian know the driver is looking at them and thereby providing a notification externally to the pedestrian (Para 5).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOAN T GOODBODY whose telephone number is (571) 270-7952. The examiner can normally be reached on M-TH 7-3 (US Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.html.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RACHID BENDIDI can be reached at (571) 272-4896. The Fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/JOAN T GOODBODY/
Primary Examiner, Art Unit 3664
(571) 270-7952