DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the amendments filed on February 23, 2026. Claims 1-20 are currently pending, with Claims 1 and 18 being amended.
Response to Amendments
In response to Applicant’s amendments filed February 23, 2026, the Examiner withdraws some of the previous objections to the drawings, maintains the previous objection to the drawings relating to reference number 202, and maintains the previous 35 U.S.C. 102 and 103 rejections.
Response to Arguments
Applicant's arguments filed February 23, 2026, have been fully considered but they are not persuasive.
Regarding Applicant’s arguments pertaining to the teachings of determining the context of the gaze based on an environmental condition of the vehicle and implementing a control signal in response (see pages 7-8 of the instant arguments), the Examiner is unpersuaded. Mangin teaches that the occupant can direct their gaze toward a climate control region in the vehicle so as to indicate a desire to regulate the temperature of the cabin (see at least Paragraphs [0055]-[0056] and [0059] of Mangin). Mangin further teaches that the control switch assigns control instructions, and that a signal is provided to the device to be controlled by generating a visual, optic, and/or acoustic feedback indicating which element is controlled in response to interpreting the user’s gaze (see at least Paragraphs [0044]-[0045] of Mangin). In other words, Mangin teaches that the user’s gaze can be indicative of a desire to change an environmental condition within the vehicle (i.e., a gaze directed toward the climate control components of the vehicle), and that the system provides control signals to vehicle components indicative of the gaze context. Applicant’s arguments that the control signals are based on the manual input of the user are not persuasive, as the claims do not preclude the use of manual input and require only that the control signal is generated based on the gaze context. As such, Mangin teaches the features of the claims as written, and the Examiner maintains the corresponding rejections.
The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for essentially the same reasons. Therefore, the corresponding rejections are maintained.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: reference number 202, indicating a “vehicle” and “another vehicle,” is recited in the description but does not appear in the drawings.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 7, 10, 12, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication No. 2017/0293355 A1, to Mangin (hereinafter referred to as Mangin; previously of record).
As per Claim 1, Mangin discloses the features of a method of controlling vehicle components (e.g. Paragraph [0031]; where an apparatus (110) assigns control instructions to a vehicle) comprising:
monitoring, using one or more optical sensors and control circuitry (e.g. Paragraph [0029]; where an optical sensor (106) of a camera (108) of an occupant detection device is directed toward a head of an occupant in order to monitor the occupant),
a first gaze of a first vehicle occupant to identify a first settled gaze (e.g. Paragraph [0029]; where the driver monitoring camera (108) is configured to detect a gaze direction and/or head posture of occupant (104));
determining, using one or more context sensors and the control circuitry, a first gaze context occurring contemporaneously with the first settled gaze (e.g. Paragraph [0033]; where the occupant (104) is gazing toward the device (112) of the vehicle (100) with the intention of controlling device (112) (i.e. settled gaze)), wherein
the first gaze context comprises data collected by the one or more context sensors (e.g. Paragraph [0029]; where a driver monitoring camera is configured to detect a gaze direction and/or a head posture of an occupant) indicating
an environmental condition internal or external to the vehicle (e.g. Paragraphs [0056], [0059]; where temperature in the passenger car can be regulated upward or downward based on a detected gaze toward a climate control region in the vehicle (i.e. the gaze is directed toward changing an environmental condition within the vehicle));
identifying, using the control circuitry, a first vehicle component based on the first settled gaze and the first gaze context (e.g. Paragraph [0034]; where the occupant detection device (102) generates an occupant gaze datum (120) using a gaze direction datum, and furnishes the datum to a suitable interface to apparatus (110)); and
generating, at input/output circuitry, a control signal to control the first vehicle component in a manner determined based at least in part on the first gaze context (e.g. Paragraphs [0037], [0055]-[0056]; where the apparatus (110) allows occupant (104) to apply control via first control device (114) based on the gaze detection; and where the temperature can be regulated upward or downward based on a detected gaze toward the climate control region).
As per Claim 2, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining that the first settled gaze is directed towards the first vehicle component (e.g. Paragraph [0033]; where the occupant is gazing toward device (112) of vehicle (100) with the intention of controlling device (112) or modifying a current setting of device (112)).
As per Claim 3, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining that the first settled gaze is directed towards a second vehicle component associated with the first vehicle component (e.g. Paragraph [0038]; where the apparatus (110) assigns a control instruction of the first control device (114) to a further device (116) in order to control the further device (116) with the first control device (114)).
As per Claim 4, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining that the first settled gaze is directed towards a predetermined direction (e.g. Paragraphs [0029], [0034]; where the driver monitoring camera (108) is configured to detect a gaze direction of the occupant (104), and the detected data are processed in a computation unit of the occupant detection device (102) in order to establish the direction in which the driver (104) is gazing).
As per Claim 7, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining the first gaze context based on an event detected by the one or more context sensors (e.g. Paragraphs [0029], [0034]; where the driver monitoring camera (108) is configured to detect a gaze direction of the occupant (104), and the detected data are processed in a computation unit of the occupant detection device (102) in order to establish the direction in which the driver (104) is gazing, and determine if the driver’s (104) attention is directed toward the road or is diverted (i.e. context)).
As per Claim 10, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining the first gaze context based on a vehicle sub-system status (e.g. Paragraph [0038]; where the apparatus (110) assigns a control instruction of the first control device (114) to further device (116) using a second occupant gaze datum (128) that represents a gaze (130) toward further device (116) (i.e. subsystem)).
As per Claim 12, Mangin discloses the features of Claim 1, and Mangin further discloses the features of further comprising determining the first gaze context based on a second gaze by the first vehicle occupant, the second gaze preceding the first settled gaze (e.g. Paragraph [0038]; where the apparatus (110) assigns a control instruction of the first control device (114) to further device (116) using a second occupant gaze datum (128) that represents a gaze (130) toward further device (116)).
As per Claim 18, Mangin discloses the features of a system to control vehicle components (e.g. Paragraph [0031]; where an apparatus (110) assigns control instructions to a vehicle) comprising:
input/output circuitry (e.g. Paragraph [0017]; where the apparatus has a computation unit for processing data, receiving and outputting data);
a first optical sensor (e.g. Paragraph [0029]; where an optical sensor (106) of a camera (108) of an occupant detection device is directed toward a head of an occupant in order to monitor the occupant);
one or more context sensors (e.g. Paragraph [0029]; where the driver monitoring camera (108) is configured to detect a gaze direction and/or head posture of occupant (104)); and
control circuitry configured to
monitor, using the one or more optical sensors, a gaze of a first vehicle occupant to determine a first settled gaze (e.g. Paragraph [0029]; where the driver monitoring camera (108) is configured to detect a gaze direction and/or head posture of occupant (104));
determine, using the one or more context sensors, a first gaze context occurring contemporaneously with the first settled gaze (e.g. Paragraph [0033]; where the occupant (104) is gazing toward the device (112) of the vehicle (100) with the intention of controlling device (112) (i.e. settled gaze));
identify a first vehicle component based on the first settled gaze and the first gaze context (e.g. Paragraph [0034]; where the occupant detection device (102) generates an occupant gaze datum (120) using a gaze direction datum, and furnishes the datum to a suitable interface to apparatus (110)); and
generate, at the input/output circuitry, a control signal to control the first vehicle component in a manner determined based at least in part on the first gaze context (e.g. Paragraphs [0037], [0055]-[0056]; where the apparatus (110) allows occupant (104) to apply control via first control device (114) based on the gaze detection; and where the temperature can be regulated upward or downward based on a detected gaze toward the climate control region).
As per Claim 19, Mangin discloses the features of Claim 18, and Mangin further discloses the features of wherein the control circuitry is configured to determine that the first settled gaze is directed towards the first vehicle component (e.g. Paragraph [0033]; where the occupant is gazing toward device (112) of vehicle (100) with the intention of controlling device (112) or modifying a current setting of device (112)).
As per Claim 20, Mangin discloses the features of Claim 18, and Mangin further discloses the features of wherein the control circuitry is configured to determine that the first settled gaze is directed towards a second vehicle component associated with the first vehicle component (e.g. Paragraph [0038]; where the apparatus (110) assigns a control instruction of the first control device (114) to a further device (116) in order to control the further device (116) with the first control device (114)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5-6 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2017/0293355 A1, to Mangin (hereinafter referred to as Mangin; previously of record), in view of U.S. Patent Publication No. 2017/0212583 A1, to Krasadakis (hereinafter referred to as Krasadakis; previously of record).
As per Claim 5, Mangin discloses the features of Claim 1, but Mangin fails to disclose every feature of further comprising determining the first gaze context based on a first occupant profile associated with the first vehicle occupant.
However, Krasadakis, in a similar field of endeavor, teaches a method for allowing a user to navigate various content through the use of gaze tracking, where application (226) on server (222) may build a user profile based on the activity of the user, including the eye gaze and movement of the user, which may be added to a user’s profile (e.g. Paragraph [0043]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining a user profile in the system of Krasadakis, in order to increase the ability to communicate more accurately with other devices (see at least Paragraph [0024] of Krasadakis).
As per Claim 6, Mangin, in view of Krasadakis, teaches the features of Claim 5, but Mangin fails to disclose every feature of wherein the first occupant profile comprises occupant habits data based on historical actions of the first vehicle occupant.
However, Krasadakis, in a similar field of endeavor, teaches a method for allowing a user to navigate various content through the use of gaze tracking, where historical data may be used to build a profile of a user’s interests, past sessions, and current viewing habits (e.g. Paragraphs [0018]-[0020]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining a user profile based on user habits in the system of Krasadakis, in order to increase the ability to communicate more accurately with other devices (see at least Paragraph [0024] of Krasadakis).
As per Claim 8, Mangin discloses the features of Claim 1, but Mangin fails to disclose every feature of further comprising determining the first gaze context based on a sound detected by the one or more context sensors.
However, Krasadakis, in a similar field of endeavor, teaches a method for allowing a user to navigate various content through the use of gaze tracking, where the microphone captures audio from the user (102); and where gaze tracking may be utilized in combination with voice commands, which may be implemented via natural language processing (NLP) (e.g. Paragraphs [0024], [0029], [0062]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining a context based on sound in the system of Krasadakis, in order to increase the ability to communicate more accurately with other devices and adapt in real time (see at least Paragraphs [0020], [0024] of Krasadakis).
As per Claim 9, Mangin discloses the features of Claim 1, but Mangin fails to disclose every feature of further comprising determining the first gaze context based on one or more spoken words detected by the one or more context sensors.
However, Krasadakis, in a similar field of endeavor, teaches a method for allowing a user to navigate various content through the use of gaze tracking, where the microphone captures audio from the user (102); where gaze tracking may be utilized in combination with voice commands, which may be implemented via natural language processing (NLP); and where particular passages or words may be associated with the user (e.g. Paragraphs [0021], [0024], [0029], [0062]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining a context based on spoken words in the system of Krasadakis, in order to increase the ability to communicate more accurately with other devices and adapt in real time (see at least Paragraphs [0020], [0024] of Krasadakis).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2017/0293355 A1, to Mangin (hereinafter referred to as Mangin; previously of record), in view of U.S. Patent Publication No. 2013/0030811 A1, to Olleon et al. (hereinafter referred to as Olleon; previously of record).
As per Claim 11, Mangin discloses the features of Claim 1, but Mangin fails to disclose every feature of further comprising determining the first gaze context based on one or more objects detected external to the vehicle using the one or more context sensors.
However, Olleon, in a similar field of endeavor, teaches a method for providing an interface for querying about external objects using gestures and eye gaze, where the user can gaze and point toward a restaurant and request information about the restaurant or ask for more information about a road sign, and the vehicle can use environmental sensors to gather data about objects external to the vehicle (i.e. detected objects external to the vehicle) (e.g. Paragraphs [0041]-[0042]; Figures 7-9).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining context based on external objects in the system of Olleon, in order to increase the accuracy of the speech recognition and of the resulting queries (see at least Paragraphs [0025], [0027] of Olleon).
Claims 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2017/0293355 A1, to Mangin (hereinafter referred to as Mangin), in view of U.S. Patent Publication No. 2013/0030645 A1, to Divine et al. (hereinafter referred to as Divine; previously of record).
As per Claim 13, Mangin discloses the features of Claim 1, but Mangin fails to disclose every feature of further comprising: determining, using the optical sensor and the control circuitry, a first occupant profile of the first vehicle occupant; and determining, using the optical sensor and the control circuitry, a second occupant profile of a second vehicle occupant.
However, Divine, in a similar field of endeavor, teaches a method for auto-control of an infotainment system based on characteristics of vehicle occupants, where the computer memory (15) stores content usage history and passenger profile information; and where use profiles for each of the vehicle occupants can be generated (e.g. Paragraphs [0026], [0037], [0038]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of determining a user profile of each occupant in the system of Divine, in order to provide allowable content to each occupant (see at least Paragraph [0055] of Divine).
As per Claim 14, Mangin, in view of Divine, teaches the features of Claim 13, but Mangin fails to disclose every feature of further comprising identifying one of the first vehicle occupant and the second vehicle occupant as a driver occupant, and the other of the first vehicle occupant and the second vehicle occupant as a passenger occupant.
However, Divine, in a similar field of endeavor, teaches a method for auto-control of an infotainment system based on characteristics of vehicle occupants, where the computer memory (15) stores content usage history and passenger profile information; and where use profiles for each of the vehicle occupants can be generated; and where for each seat position in the vehicle, an occupant map may include occupant status in relation to the vehicle driver, and the system may apply different rules when the occupant is identified as the driver; and where the system may support a dual view display for tracking the eye movements of the occupant and the driver (e.g. Paragraphs [0039], [0044], [0061], [0071], [0073]; Figures 2, 3).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of identifying each occupant in the system of Divine, in order to provide allowable content to each occupant and increase safety (see at least Paragraph [0055] of Divine).
As per Claim 15, Mangin, in view of Divine, teaches the features of Claim 13, but Mangin fails to disclose every feature of wherein one of the first occupant profile and the second occupant profile comprises a default occupant profile.
However, Divine, in a similar field of endeavor, teaches a method for auto-control of an infotainment system based on characteristics of vehicle occupants, where priority may be given to the driver for providing content recommendations; and where a default mode is implemented based on content recommendation (e.g. Paragraph [0068]; Figure 10).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of identifying a default profile in the system of Divine, in order to provide allowable content to each occupant and increase safety (see at least Paragraph [0055] of Divine).
As per Claim 16, Mangin, in view of Divine, teaches the features of Claim 13, but Mangin fails to disclose every feature of wherein the first gaze context is based on the first occupant profile and the second occupant profile.
However, Divine, in a similar field of endeavor, teaches a method for auto-control of an infotainment system based on characteristics of vehicle occupants, where combined context information is provided based on comparison of points of interest and selection of input from the occupants (e.g. Figures 8, 12B, 13A-B).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of identifying the context in the system of Divine, in order to provide allowable content to each occupant and increase safety (see at least Paragraph [0055] of Divine).
As per Claim 17, Mangin, in view of Divine, teaches the features of Claim 13, but Mangin fails to disclose every feature of further comprising determining the first gaze context based on an authorization priorities indicator between the first occupant profile and the second occupant profile.
However, Divine, in a similar field of endeavor, teaches a method for auto-control of an infotainment system based on characteristics of vehicle occupants, where priority may be given to the driver for providing content recommendations regardless of the seat location (e.g. Paragraph [0068]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the method for assigning control instructions in a vehicle in the system of Mangin with the feature of identifying the priority of a user profile in the system of Divine, in order to provide allowable content to each occupant and increase safety (see at least Paragraph [0055] of Divine).
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Graumann (U.S. 2014/0348389 A1), which teaches a method for controlling devices based on a detected gaze.
Li (U.S. 2020/0349937 A1), which teaches a method for presenting location information and implementing a task based on user gaze.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT LEVY whose telephone number is (571)270-5595. The examiner can normally be reached Mon-Fri 0630-1600.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MERRITT LEVY/Examiner, Art Unit 3663
/ABBY J FLYNN/Supervisory Patent Examiner, Art Unit 3663