DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.
Claims 1, 4, 5, 7, 8, 11, 12 and 14-20 are amended. Claims 1-20 are presently pending and examined.
Response to Arguments
112 Rejection
Applicant's amendments and accompanying arguments, see remarks, filed 11/10/2025, with respect to the 112 rejections of Claims 1, 8 and 15 have been fully considered and are persuasive. The 112 rejections have been withdrawn.
101 Rejection
Applicant’s amendments and accompanying arguments, see remarks, filed 11/10/2025, with respect to 101 rejections of Claims 15-20 have been fully considered and are persuasive. The 101 rejection of Claims 15-20 has been withdrawn.
Prior Art Rejection
Applicant’s amendments and accompanying arguments, see remarks, filed 11/10/2025, with respect to the rejection(s) of claim(s) 1-20 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Scott Mayberry US20200249046 (“Mayberry”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-6, 8, 11-13, 15, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jeremiah Golston et al. US12084045 ("Golston") in view of Jungkyun Jung et al. US20200050858A1 ("Jung") and Scott Mayberry US20200249046 ("Mayberry").
As per Claims 1, 8 and 15,
Golston discloses (see at least Fig. 2 and Fig. 5):
capturing, by sensors in a vehicle, sensor data of objects that were previously brought into the vehicle (see at least [Col. 13, lines 23-27]: the computer vision analyzer 118 may detect one or more occupant possessions by detecting one or more objects in the image data (e.g., in one or more images, in one or more subsets of the image(s)));
determining a destination of the vehicle (see at least [0066]: Some configurations may detect that one occupant has an urgent medical situation and may reroute the plan); and
execution of a machine learning model on the sensor data of the objects brought into the vehicle (see at least [0030]: Some configurations of the systems and methods disclosed herein may involve object detection, object recognition, and/or machine learning; [0065]: Additional states that may be determined by machine learning may include objects carried by different occupants and/or medical situations of the occupants; and [0069]: In some configurations, the processor 112 may include and/or implement a machine learner 140. The machine learner 140 may perform machine learning based on the sensor data in order to determine and/or refine the occupant status determination).
Golston does not disclose,
capturing, by sensors in a vehicle, sensor data of objects that were previously brought into the vehicle;
analyzing the sensor data to identify times when the objects were previously brought into the vehicle and an activity associated with the objects;
determining a destination of the vehicle, based on execution of a machine learning model on the activity and the times when the objects were previously brought into the vehicle;
automatically generating a route for the vehicle to the determined destination; and
controlling the vehicle to autonomously maneuver itself to the determined destination via the generated route.
Jung teaches,
automatically generating a route for the vehicle to the determined destination (see at least Fig. 16, Steps S163 and S164; [0062]: autonomous driving may include all of a technology of maintaining the lane in which a vehicle is driving, a technology of automatically adjusting a vehicle speed such as adaptive cruise control, a technology of causing a vehicle to automatically drive along a given route, and a technology of automatically setting a route, along which a vehicle drives, when a destination is set; and [0114]: Autonomous driving vehicle 100 b may determine a movement route and a driving plan using at least one of map data, object information detected from sensor information); and
controlling the vehicle to autonomously maneuver itself to the determined destination via the generated route (see at least Fig. 16, S164, [0062] autonomous driving may include all of a technology of maintaining the lane in which a vehicle is driving, a technology of automatically adjusting a vehicle speed such as adaptive cruise control, a technology of causing a vehicle to automatically drive along a given route, and a technology of automatically setting a route, along which a vehicle drives, when a destination is set.)
Thus, Golston discloses identifying objects in a vehicle, using machine learning on the sensor data of the objects, and rerouting to a destination, and Jung teaches automatically generating a route and controlling the vehicle autonomously.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Golston with the autonomous driving taught by Jung, in order to transfer an item in a vehicle to a user via autonomous driving based on identified information on the item and user information (Jung, [0008]).
Mayberry teaches,
capturing, by sensors in a vehicle, sensor data of objects that were previously brought into the vehicle (see at least [0021]: A single image captured by one of the various imaging systems or multiple images captured by two or more of the imaging systems may be processed by the destination prediction module 136 in the computer system 130 for identifying a category of the attire worn by the individual 140; and [0023]: In some other implementations, the various categories may be characterized using historical data 137. The historical data 137 may include data derived from images captured by the imaging system 122, the imaging system 124, and/or the imaging system in the navigation assistance equipment 121);
analyzing the sensor data to identify times when the objects were previously brought into the vehicle and an activity associated with the objects (see at least [0012]: The historical data may include a record of times at which the occupant previously traveled to a particular destination and the attire worn by the occupant when traveling to the destination; and [0026]: The destination prediction module 136 may use the historical data 137 and/or the supplementary data 138 stored in the memory 132, for predicting a travel destination of the autonomous vehicle 120); and
determining a destination of the vehicle, based on execution of a machine learning model on the activity and the times when the objects were previously brought into the vehicle (see at least [0012]: The travel destination may be additionally predicted based on other factors such as an attire worn by a co-occupant of the automobile and historical data associated with the occupant and/or the co-occupant. The historical data may include a record of times at which the occupant previously traveled to a particular destination and the attire worn by the occupant when traveling to the destination; [0039]: The travel destination may also be predicted based on other factors such as an attire worn by a co-occupant of the automobile, a time of travel, past history (historical data), and/or supplementary data; and [0044]: The travel destination may also be predicted based on other factors such as an attire worn by a co-occupant of the automobile, a time of travel, past history (historical data), and/or supplementary data).
Thus, Golston discloses identifying objects in a vehicle, using machine learning on the sensor data of the objects, and rerouting to a destination, and Mayberry teaches predicting the travel destination of an automobile based on the attire worn by an individual.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention disclosed by Golston with the prediction system taught by Mayberry, to predict a travel destination of an automobile based at least in part on identifying a category of attire worn by an individual who is an occupant of the automobile or is moving towards the automobile with the intention of entering the vehicle (Mayberry, [0012]).
As per Claims 4, 11 and 18,
Golston discloses,
The method of claim 1,
further comprising determining types of the objects and locations of the objects in the vehicle (see at least [Col. 8, line 66 - Col. 9 Line 3] a depth sensor may detect a distance (e.g., a vehicle internal distance) between the depth sensor and an occupant or object (to detect occupant presence, occupant possessions, and/or other objects), [Col. 13, line 23-27] the computer vision analyzer 118 may detect one or more occupant possessions by detecting one or more objects in the image data (e.g., in one or more images, in one or more subsets of the image(s), etc.), and [Col. 14, line 28-33] Object recognition may be utilized to classify one or more objects, to determine what possessions an occupant has, to determine whether an object poses a safety risk, whether a possession has been left in the vehicle, etc.)
As per Claims 5, 12 and 19,
Golston discloses,
The method of claim 1, further comprising initiating an information session that performs at least one of displaying information and playing audio inside a cabin of the vehicle (see at least [Col. 4, lines 9-15]: vehicle operations may include maintaining the comfort of occupant(s) and making adjustments to travel plans based on changes in circumstances. Some configurations may enable the vehicle to detect occupant status (e.g., occupant states) and/or situations and act accordingly so that the overall experience and driving environment is safe and comfortable; and [Col. 20, lines 22-38]: the vehicle operation determiner 124 may adjust cabin temperature, activate or deactivate audio (e.g., music), adjust audio volume, activate or deactivate video (e.g., a movie, a show, etc.), change video volume, adjust display brightness (of the display 132 and/or one or more other displays, for example), command a route change, send a reminder message (e.g., text message, email, etc.) to a vehicle occupant (or previous vehicle occupant) to remember a possession in the vehicle, prompt an occupant for input, send information (e.g., stream one or more camera outputs) to a remote device, etc.).
As per Claims 6, 13 and 20,
Golston discloses,
The method of claim 1, comprising
receiving occupant device sensor data from an occupant device (see at least Fig. 3, health sensor; Fig. 12; [Col. 9, lines 63-66]: Additionally or alternatively, one or more sensors 122 may be separate from the electronic device 102 and may communicate with the electronic device 102; and [Col. 24, lines 23-24]: health sensors 350 (e.g., attached wellness sensors, fitness trackers, etc.)); and
determining a condition of the occupant based on the received occupant device sensor data (see at least [Col. 9, lines 28-33]: A health sensor may indicate health data. For example, health sensors may indicate heart rate, occupant temperature, occupant motion, blood pressure, etc. Health sensors may provide an indication of occupant health status (e.g., whether one or more vital signs are within normal limits (and/or other threshold limits)); [Col. 24, lines 55-61]: Health sensors (e.g., attached wellness sensors) may sense one or more types of information (e.g., heart rate, temperature, motion, etc.) that may indicate one or more aspects of occupant status. In some configurations, machine learning (ML) (in the health sensor 350 and/or in the vehicle 366, for example) may be utilized to learn about occupant status based on the sensor data from a health sensor 350; and [Col. 32, lines 56-58]: machine learning may be used in determining health state and occupant priority relation).
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Golston, Jung and Mayberry as per Claim 1, and further in view of Thomas Schulz et al. DE102016010110 ("Schulz").
As per Claims 2, 9 and 16,
Golston discloses,
The method of claim 1
Golston does not disclose,
identifying a time of week and time of day; and confirming the objects in the vehicle based on the one or more of the time of week and the time of day.
Schulz teaches,
identifying a time of week and time of day (see at least [0011]: The estimation can also include data sets that include, for example, the time of day, the season, the day of the week, the user's calendar entries or, for example, a holiday or event calendar at the vehicle's location. This can be used to determine whether it is a weekday that, for example, falls outside of the holiday season. Based on the time of day, it can then be estimated whether a family visit to relatives is more likely, or whether the user would rather drive to or from work); and
confirming the objects in the vehicle based on the one or more of the time of week and the time of day (see at least [0018]: on a weekday and at the appropriate time and in work clothes, a trip to the company headquarters can be suggested. Depending on the occupants, the time of day and, in particular, the occupants' clothing, additional trips may also be suggested, for example a trip to the opera, a trip to a restaurant or other excursion destinations. In particular, an event calendar for the respective region can be helpful in selecting potential events that match the time and clothing of the vehicle's occupants. In addition, user preferences can be collected and evaluated. For example, the music that the user prefers to listen to via the vehicle's media system can be used to estimate whether he or she is more likely to go to a classical opera or a rock concert, or to a jazz cellar or a techno club).
Thus, Golston discloses a system and method of operating a vehicle based on sensor data, and Schulz teaches using time along with sensor data to determine a travel destination.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention disclosed by Golston with the time-of-day analysis taught by Schulz, with a reasonable expectation of success, to estimate the purpose of the trip, limit the possible destinations to those that come closest to the expected purpose of the trip, and enable easy and efficient selection of a route (Schulz, [0008]).
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Golston, Jung and Mayberry as per Claim 1, and further in view of Pier Paolo Porta US 11482019 B2 ("Pier").
As per Claims 3, 10 and 17,
Golston discloses,
identifying one or more additional occupants (see at least Fig. 4, Fig. 5, and [Col. 2, line 1-3] The first portion of the sensor data may correspond to a first occupant and the second portion of the sensor data may correspond to a second occupant, [Col. 4, line 41-45] some configurations may recognize that an occupant (e.g., driver or passenger) has left something (e.g., a possession, a belonging, etc.) in the vehicle. Some configurations may monitor a child that is left in car while a parent has gone into a store, and [Col. 33, Line 9-13] occupant priority relation with different occupants may be obtained 963 (e.g., fetched) from a history (if more than one occupant is recognized, for example). For instance, person and/or facial recognition may be employed to determine each of the occupants).
analyzing one or more of the objects in the vehicle adjacent to the one or more additional occupants (see at least [Col. 8, line 66 - Col. 9, line 3]: a depth sensor may detect a distance (e.g., a vehicle internal distance) between the depth sensor and an occupant or object (to detect occupant presence, occupant possessions, and/or other objects); [Col. 12, lines 54-58]: computer vision analyzer 118 may perform object detection, object recognition, object tracking, object classification, face detection, face recognition, optical character recognition, scene understanding, emotion detection, comfort level detection, anxiety level detection, and/or optical character recognition, etc.; and [Col. 25, lines 46-52]: The image sensor data may include images of two or more occupants (and/or possessions corresponding to two or more occupants), the weight sensor data may include weight sensor data for two or more occupants, the audio data may include audio sensor (e.g., microphone) data for two or more occupants, and/or the health sensor data may include health sensor data for two or more occupants, etc.).
Golston does not explicitly disclose,
an age of the one or more additional occupants. (Golston discloses the use of facial recognition to determine occupants.)
Pier teaches,
determining an age of the one or more additional occupants (see at least [Col. 16, lines 30-38]: one or more of age, height and/or weight may be the determined biometric markers. The biometric markers may be used to differentiate between a child, an adolescent, a pregnant woman, a young adult, teenager, adult, etc. Feature maps may be detected and/or extracted while the video data is processed in the pipeline module 156 to generate inferences about body characteristics to determine age, gender, and/or condition (e.g., wrinkles, facial structure, bloodshot eyes, eyelids, signs of exhaustion, etc.); and [Col. 33, lines 60-62]: The computer vision operations may determine that the face 510b has characteristics corresponding to a child (e.g., pre-teen facial features)).
Thus, Golston discloses identifying one or more occupants and objects in a vehicle using computer vision analysis and facial recognition, and Pier teaches determining age, height and weight using biometric markers extracted from image processing and computer vision operations.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention disclosed by Golston with the age determination method taught by Pier, with a reasonable expectation of success, to determine multiple objects and the ages of the occupants in a vehicle.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Golston, Jung and Mayberry as per Claim 1, and further in view of Takamitsu Suzuki et al. (US 10048079) ("Suzuki").
As per Claims 7 and 14,
Golston discloses,
The method of claim 1, comprising identifying an additional occupant (see at least [Col. 2, lines 1-3]: The first portion of the sensor data may correspond to a first occupant and the second portion of the sensor data may correspond to a second occupant; and [Col. 25, lines 46-52]: The image sensor data may include images of two or more occupants (and/or possessions corresponding to two or more occupants), the weight sensor data may include weight sensor data for two or more occupants, the audio data may include audio sensor (e.g., microphone) data for two or more occupants, and/or the health sensor data may include health sensor data for two or more occupants, etc.).
Golston does not explicitly disclose,
determining an additional destination based on the additional occupant.
Mayberry teaches,
determining an additional destination based on the additional occupant (see at least [0012] The travel destination may be additionally predicted based on other factors such as an attire worn by a co-occupant of the automobile and historical data associated with the occupant and/or the co-occupant, [0040] The travel destination may also be predicted based on other factors such as an attire worn by a co-occupant of the automobile, a time of travel, past history (historical data), and/or supplementary data, and [0056] Example 7 may include the method of example 6 and/or some other example herein, wherein each of the imaging system and the first computer system is located in the automobile, and wherein the second individual is a passenger in the automobile.)
Thus, Golston discloses identifying one or more additional occupants, and Mayberry teaches determining the destination based on a co-passenger or additional passenger.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention disclosed by Golston with the destination determination taught by Mayberry, with a reasonable expectation of success, to predict the travel destination based on other factors such as an attire worn by a co-occupant of the automobile (Mayberry, [0040]).
Suzuki teaches,
determining an additional destination based on the additional occupant (see at least [Col. 1, lines 37-39]: Although it is conceivable to pass through the multiple destinations in order, there is a need to determine one destination to be initially directed even in such a case; and [Col. 2, lines 35-38]: when the multiple occupants get on the vehicle, the destination to which the multiple occupants can agree can be determined with less effort).
Thus, Golston discloses identifying one or more additional occupants, and Suzuki teaches determining additional destinations based on the one or more additional occupants.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention disclosed by Golston with the destination determination taught by Suzuki, with a reasonable expectation of success, to obtain a destination and routing to which all the occupants can agree.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant should take note of the prior art in the PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHUTOSH PANDE whose telephone number is (571) 272-6269. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.P./Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668