DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Note
Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to any definition of the Applicant's that is not specifically set forth in the claims.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Status of Application
Claims 1-14 are pending in this application. In the claim set filed 11/28/2025:
Claim 1 is the independent claim observed in the application.
Claims 1-3, 5, 6, 10 and 13 have been indicated as amended.
Claims 4, 7-9, 11, 12 and 14 have been indicated as originally presented.
Response to Arguments
With respect to Applicant’s remarks filed on 11/28/2025, Applicant's “Amendments and Remarks” have been fully considered. Applicant’s remarks will be addressed in sequential order as they were presented.
With respect to the objections to claims 3, 5, 6, 10 and 13, Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the objections to claims 3, 5, 6, 10 and 13 have been withdrawn.
With respect to the rejections of claims 5 and 10 under 35 U.S.C. § 112(b), Applicant’s “Amendments and Remarks” have been fully considered and are persuasive. Therefore, the rejections of claims 5 and 10 under 35 U.S.C. § 112(b) have been withdrawn.
With respect to the rejections of claims 1-14 under 35 U.S.C. § 101, Applicant’s “Amendments and Remarks” have been fully considered but are NOT persuasive.
The Applicant argues that the amendment to claim 1 reciting “utilizing at least one sensor” to ascertain “conditions within a passenger compartment of the motor vehicle, ambient conditions around the motor vehicle and operating parameters of the motor vehicle” results in the claimed process no longer falling into the category of a mental process.
The Examiner respectfully disagrees. The Applicant’s amendment recites a method of using a generic sensor to gather broadly recited “conditions within a passenger compartment of the motor vehicle, ambient conditions around the motor vehicle and operating parameters of the motor vehicle.” One of ordinary skill in the art would interpret this step as mere data gathering, and this limitation would therefore be considered insignificant extra-solution activity, specifically pre-solution activity as explained in MPEP § 2106.05(g). Therefore, the amendment reciting “utilizing at least one sensor” is not a sufficient additional element such that the claimed process no longer falls into the category of a mental process.
As a result, the rejections of claims 1-14 under 35 U.S.C. § 101 are maintained.
With respect to the rejections of claims 1-14 under 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103, Applicant’s “Amendments and Remarks” have been fully considered but are NOT persuasive.
In particular, the Applicant argues that Shintani does not specifically state the amended claim limitation reciting: “evaluating the ascertained conditions within a passenger compartment, ambient conditions, operating parameters, and information about the state of the first passenger” but rather that Shintani “only utilizes information about the state of the passenger.”
The Examiner respectfully disagrees. Shintani discloses a plurality of sensors for gathering information as follows:
“The agent device 1 has a control unit (or a controller) 100, a sensor unit 11 (that includes a global positioning system (GPS) sensor 111, a vehicle speed sensor 112, and a gyro sensor 113 and may include a temperature sensor inside or outside the vehicle, a temperature sensor of a seat or a steering wheel, or an acceleration sensor), a vehicle information unit 12, a storage unit 13, a wireless unit 14 (that includes a proximity wireless communication unit 141 and a wireless network communication unit 142), a display unit 15, an operation input unit 16, an audio unit 17 (an audio (or voice) output unit), a navigation unit 18, an image capturing unit 191 (an in-vehicle camera), an audio input unit 192 (a microphone), and a timing unit (a clock) 193, as illustrated in FIG. 2, for example. The clock may be a component which employs time information of a GPS described later;” [Shintani; Fig. 2; ¶: 0017];
And “The operation input unit 16 senses an input operation, such as steering, that is useful for estimating (or presuming) a feeling (or emotion) of an occupant, an amount of depression of an accelerator pedal or a brake pedal, operation of a window or an air conditioner (a temperature setting or a measurement of the temperature sensor inside or outside the vehicle) in addition to operation of pressing a switch or the like;” [Shintani; ¶: 0018].
One of ordinary skill in the art would consider at least the cited paragraphs and figures of Shintani as patentably indistinct from the Applicant’s broadly recited limitation stating: “evaluating the ascertained conditions within a passenger compartment, ambient conditions, operating parameters, and information about the state of the first passenger.”
As a result, the rejections of claims 1-14 under 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103 are maintained.
Office Note: In view of applicant’s amendments, further claim rejections are set forth on the record in the Final Office Action below.
Final Office Action
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites using at least one sensor to ascertain “conditions within a passenger compartment,” “ambient conditions around the motor vehicle and operating parameters of the motor vehicle” (these limitations broadly recite receiving data), “evaluating information about a state of a first passenger and selecting information and/or entertainment from a result” (these limitations broadly recite analyzing the previously received data), and “providing the information and/or entertainment selected for the first passenger” (this limitation broadly recites outputting information based on the previously analyzed data).
The limitations presented above, which present steps of receiving data from a broadly recited sensor, analyzing the received data, and outputting information based on the previously analyzed data, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of insignificant extra-solution activity. That is, other than reciting “utilizing at least one sensor” (which is pre-solution activity) and “providing the information and/or entertainment selected for the first passenger” (which is post-solution activity), nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “utilizing at least one sensor” and “providing the information and/or entertainment selected for the first passenger” language, receiving data and analyzing the received data in the context of the claims encompasses a user manually analyzing gathered data and selecting an appropriate information output based on the results of the analysis. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of insignificant extra-solution activity, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites two additional elements – “utilizing at least one sensor” and “providing the information and/or entertainment selected for the first passenger.” The “utilizing at least one sensor” and “providing information” in these steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception by incorporating additional elements in the claimed method. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “utilizing at least one sensor” and “providing the information and/or entertainment selected for the first passenger” based on the analysis of previously gathered data amount to no more than mere instructions to apply the exception by incorporating additional elements in the claimed method. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Dependent claims 2-14, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements, if any, in the dependent claims are not sufficient to amount to significantly more than the judicial exception for the same reasons as with claim 1.
Office Note: In order to overcome this rejection, the Office suggests further defining the limitations of the independent claim, for example by linking the claimed subject matter to a non-generic device, by controlling a vehicle or an apparatus in some way with the output of the data, or by further showing that the claimed subject matter is an improvement to a technical field. Limitations such as those suggested above would bring the claimed subject matter out of the realm of abstract ideas and into the realm of a statutory category.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 7, 8 and 10-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shintani et al. (United States Patent Publication 2018/0096699 A1), hereinafter referred to as Shintani.
With respect to claim 1, Shintani discloses:
“A method for providing information and entertainment in a motor vehicle, the method comprising: ascertaining conditions within a passenger compartment of the motor vehicle, ambient conditions around the motor vehicle and operating parameters of the motor vehicle, utilizing at least one sensor” [Shintani; "The vehicle information unit 12 acquires vehicle information via an in-vehicle network such as a CAN-BUS (CAN). The vehicle information includes information on the ON/OFF states of an ignition switch, an operation state of a safety system (Advanced Driving Assistant System (ADAS), Antilock Brake System (ABS), an airbag, and the like), or the like. The operation input unit 16 senses an input operation, such as steering, that is useful for estimating (or presuming) a feeling (or emotion) of an occupant, an amount of depression of an accelerator pedal or a brake pedal, operation of a window or an air conditioner (a temperature setting or a measurement of the temperature sensor inside or outside the vehicle) in addition to operation of pressing a switch or the like. A storage unit 13 of the agent device 1 has a sufficient storage capacity for continuously storing voice data of occupants during driving of the vehicle. Further, various information may be stored on the server 3;" ¶: 0018;
"The traffic state information acquisition unit 414 acquires traffic state information. A traveling cost (a distance, a required traveling time, a degree of traffic congestion, or an amount of energy consumption) of a navigation route or roads included in the area covering the navigation route or a link of the roads transmitted to the information-providing device 4 from the server 3 may be acquired as traffic state information;" ¶: 0031;
"The information acquisition unit 410 acquires voice data or realtime data of an occupant of the vehicle X (FIG. 5, STEP 102). An utterance or a conversation of one or a plurality of occupants in a cabin of the vehicle X detected by the audio input unit 192 (292) is acquired as voice data;" ¶: 0036];
“ascertaining information about a state of a first passenger, utilizing at least one sensor” [Shintani; "when it is determined that the occupants in the vehicle X are excited (or the same keyword or phrase is repeated) (FIG. 5, STEP 106, YES or STEP 108, YES), the target keyword designation unit 423 determines the past certain time range (the length ranging from several seconds to several-ten seconds) occurring before the time when the occupants are excited. The past target time range of a certain time period (for example, one minute) before the time when the estimated feeling value above the threshold occurred is determined (FIG. 5, STEP 110). The target keyword designation unit 423 designates a target keyword from the keywords extracted from the voice data during the target time range and then outputs the target keyword via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 112);" Fig. 5; ¶: 0041; See also: ¶: 0035-0040, 0042-0046];
“evaluating the ascertained conditions within a passenger compartment, ambient conditions, operating parameters, and information about the state of the first passenger” [Shintani; "The agent device 1 has a control unit (or a controller) 100, a sensor unit 11 (that includes a global positioning system (GPS) sensor 111, a vehicle speed sensor 112, and a gyro sensor 113 and may include a temperature sensor inside or outside the vehicle, a temperature sensor of a seat or a steering wheel, or an acceleration sensor), a vehicle information unit 12, a storage unit 13, a wireless unit 14 (that includes a proximity wireless communication unit 141 and a wireless network communication unit 142), a display unit 15, an operation input unit 16, an audio unit 17 (an audio (or voice) output unit), a navigation unit 18, an image capturing unit 191 (an in-vehicle camera), an audio input unit 192 (a microphone), and a timing unit (a clock) 193, as illustrated in FIG. 2, for example. The clock may be a component which employs time information of a GPS described later;" Fig. 2; ¶: 0017;
"The operation input unit 16 senses an input operation, such as steering, that is useful for estimating (or presuming) a feeling (or emotion) of an occupant, an amount of depression of an accelerator pedal or a brake pedal, operation of a window or an air conditioner (a temperature setting or a measurement of the temperature sensor inside or outside the vehicle) in addition to operation of pressing a switch or the like;" ¶: 0018;
"The information-providing device 4 includes the control unit 100 (200) and, in accordance with the operation thereof, may acquire realtime information or accumulated information from the sensor unit 11 (22), the vehicle information unit 12, the wireless unit 14 (24), the operation input unit 16, the audio unit 17, the navigation unit 18, the image capturing unit 191 (291), the audio input unit 192 (292), the timing unit (the clock) 193, and the storage unit 13 (23) if necessary, and may provide information (content) to the occupants via the display unit 15 (25) or the audio output unit 17 (27). Further, information necessary for ensuring optimal use of the information-providing device 4 by the occupants is stored in the storage unit 13 (23);" Fig. 4; ¶: 0024; See also: ¶: 0031];
“and selecting information and/or entertainment from a result; and providing the information and/or entertainment selected for the first passenger” [Shintani; "when it is determined that the occupants in the vehicle X are excited (or the same keyword or phrase is repeated) (FIG. 5, STEP 106, YES or STEP 108, YES), the target keyword designation unit 423 determines the past certain time range (the length ranging from several seconds to several-ten seconds) occurring before the time when the occupants are excited. The past target time range of a certain time period (for example, one minute) before the time when the estimated feeling value above the threshold occurred is determined (FIG. 5, STEP 110). The target keyword designation unit 423 designates a target keyword from the keywords extracted from the voice data during the target time range and then outputs the target keyword via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 112);" Fig. 5; ¶: 0041; See also: ¶: 0035-0040, 0042-0046].
With respect to claim 2, Shintani discloses: “The method as claimed in claim 1, further comprising ascertaining information about a state and/or behavior of any further passengers” [Shintani; "In accordance with information including a conversation between occupants of the vehicle X, the excitement determination unit 421 determines whether or not the feeling or the atmosphere of occupants in the vehicle X corresponds to excitement (FIG. 5, STEP 106). This process corresponds to a primary determination process for determining the presence or absence of excitement;" Fig. 5; ¶: 0038; See also: ¶: 0035-0037, 0039-0046].
With respect to claim 3, Shintani discloses: “The method as claimed in claim 2, wherein evaluating the ascertained conditions includes utilizing the ascertained information about the state and/or behavior of any further passengers and further comprising providing the information and/or entertainment for the first passenger and the further passengers from the result” [Shintani; "when it is determined that the occupants in the vehicle X are excited (or the same keyword or phrase is repeated) (FIG. 5, STEP 106, YES or STEP 108, YES), the target keyword designation unit 423 determines the past certain time range (the length ranging from several seconds to several-ten seconds) occurring before the time when the occupants are excited. The past target time range of a certain time period (for example, one minute) before the time when the estimated feeling value above the threshold occurred is determined (FIG. 5, STEP 110). The target keyword designation unit 423 designates a target keyword from the keywords extracted from the voice data during the target time range and then outputs the target keyword via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 112);" Fig. 5; ¶: 0041; See also: ¶: 0035-0040, 0042-0046].
With respect to claim 4, Shintani discloses: “The method as claimed in claim 2, wherein reactions of the first passenger and any further passengers to the information and/or entertainment provided are ascertained and stored in a storage medium and taken into consideration by a memory reading step for a subsequent selection of information and/or entertainment to be provided” [Shintani; "The information acquisition unit 410 acquires occupant state information indicating a state of the occupant when the occupant perceives a target keyword, and the feeling estimation and determination unit 4211 estimates a second feeling from a reaction of the occupant in accordance with the occupant state information (second information) (FIG. 5, STEP 114). Specifically, with the second information being input, a filter created by machine learning, such as deep learning, or by a support vector machine is used to estimate a feeling of the occupant;" Fig. 5; ¶: 0042;
"The information generating unit 430 determines whether or not the second feeling of the occupant to the target keyword estimated by the feeling estimation and determination unit 4211 corresponds to affirmation (sympathy or the like) (FIG. 5, STEP 116). When it is determined that the second feeling of the occupant does not correspond to affirmation such as corresponding to denial (FIG. 5, STEP 116, NO), the process on and after the determination of presence or absence of excitement is repeated (see FIG. 5, STEP 106 to STEP 116). On the other hand, when it is determined that the second feeling of the occupant corresponds to affirmation (FIG. 5, STEP 116, YES), the information generating unit 430 acquires information associated with the target keyword (FIG. 5, STEP 118). Such information may be searched from an external information source each time. In this case, the external information frequently obtained (automatically transmitted) from the external information source may be temporarily stored in the storage unit 13 (23), and information may be selected therefrom. The information generating unit 430 outputs this information via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 120). This output information is provided as “information suitable for a content of a conversation between occupants of the vehicle X” or “information suitable for an atmosphere of occupants of the vehicle X”;" Fig. 5; ¶: 0044; See also: ¶: 0035-0041, 0043, 0045, 0046].
With respect to claim 5, Shintani discloses: “The method as claimed in claim 4, wherein the first passenger is identified and is associated with the reactions stored, which is taken into consideration by the memory reading step for a subsequent selection of the information and/or entertainment to be provided” [Shintani; "The information generating unit 430 determines whether or not the second feeling of the occupant to the target keyword estimated by the feeling estimation and determination unit 4211 corresponds to affirmation (sympathy or the like) (FIG. 5, STEP 116). When it is determined that the second feeling of the occupant does not correspond to affirmation such as corresponding to denial (FIG. 5, STEP 116, NO), the process on and after the determination of presence or absence of excitement is repeated (see FIG. 5, STEP 106 to STEP 116). On the other hand, when it is determined that the second feeling of the occupant corresponds to affirmation (FIG. 5, STEP 116, YES), the information generating unit 430 acquires information associated with the target keyword (FIG. 5, STEP 118). Such information may be searched from an external information source each time. In this case, the external information frequently obtained (automatically transmitted) from the external information source may be temporarily stored in the storage unit 13 (23), and information may be selected therefrom. The information generating unit 430 outputs this information via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 120). This output information is provided as “information suitable for a content of a conversation between occupants of the vehicle X” or “information suitable for an atmosphere of occupants of the vehicle X”;" Fig. 5; ¶: 0044].
With respect to claim 7, Shintani discloses: “The method as claimed in claim 1, further comprising ascertaining data of a map” [Shintani; "The traffic state information acquisition unit 414 acquires traffic state information. A traveling cost (a distance, a required traveling time, a degree of traffic congestion, or an amount of energy consumption) of a navigation route or roads included in the area covering the navigation route or a link of the roads transmitted to the information-providing device 4 from the server 3 may be acquired as traffic state information. A navigation route is calculated by the navigation unit 18 or the navigation function of the mobile terminal device 2 or the server 3 for a plurality of continuous links from the current location or a starting location to the destination location. The current location of the information-providing device 4 is measured by the GPS sensor 111 (211). The starting location and the destination location are set by an occupant via the operation input unit 16 (26) or the audio input unit 192 (292);" ¶: 0031].
With respect to claim 8, Shintani discloses: “The method as claimed in claim 7, further comprising ascertaining a current location of the motor vehicle and a destination for a current journey” [Shintani; "The traffic state information acquisition unit 414 acquires traffic state information. A traveling cost (a distance, a required traveling time, a degree of traffic congestion, or an amount of energy consumption) of a navigation route or roads included in the area covering the navigation route or a link of the roads transmitted to the information-providing device 4 from the server 3 may be acquired as traffic state information. A navigation route is calculated by the navigation unit 18 or the navigation function of the mobile terminal device 2 or the server 3 for a plurality of continuous links from the current location or a starting location to the destination location. The current location of the information-providing device 4 is measured by the GPS sensor 111 (211). The starting location and the destination location are set by an occupant via the operation input unit 16 (26) or the audio input unit 192 (292);" ¶: 0031].
With respect to claim 10, Shintani discloses: “The method as claimed in claim 1, further comprising registering at least one of states of a flow of traffic that the motor vehicle is in, a driving task, navigation directions, a driving time already required, an estimated driving time required to a destination and/or a current whereabouts of the motor vehicle” [Shintani; "The traffic state information acquisition unit 414 acquires traffic state information. A traveling cost (a distance, a required traveling time, a degree of traffic congestion, or an amount of energy consumption) of a navigation route or roads included in the area covering the navigation route or a link of the roads transmitted to the information-providing device 4 from the server 3 may be acquired as traffic state information. A navigation route is calculated by the navigation unit 18 or the navigation function of the mobile terminal device 2 or the server 3 for a plurality of continuous links from the current location or a starting location to the destination location. The current location of the information-providing device 4 is measured by the GPS sensor 111 (211). The starting location and the destination location are set by an occupant via the operation input unit 16 (26) or the audio input unit 192 (292);" ¶: 0031].
With respect to claim 11, Shintani discloses: “The method as claimed in claim 1, further comprising registering at least one of the operating parameters: vehicle speed, engine speed, energy reserve, engine oil temperature, yaw rate, tire pressure or acceleration” [Shintani; "The agent device 1 has a control unit (or a controller) 100, a sensor unit 11 (that includes a global positioning system (GPS) sensor 111, a vehicle speed sensor 112, and a gyro sensor 113 and may include a temperature sensor inside or outside the vehicle, a temperature sensor of a seat or a steering wheel, or an acceleration sensor);" ¶: 0017;
"The operation input unit 16 senses an input operation, such as steering, that is useful for estimating (or presuming) a feeling (or emotion) of an occupant, an amount of depression of an accelerator pedal or a brake pedal, operation of a window or an air conditioner (a temperature setting or a measurement of the temperature sensor inside or outside the vehicle) in addition to operation of pressing a switch or the like;" ¶: 0018].
With respect to claim 12, Shintani discloses: “The method as claimed in claim 1, wherein evaluating information about the state of the first passenger encompasses at least information concerning the tiredness, emotion, mood, excitation, stress, attentiveness, intoxication, age and/or sex of the first passenger” [Shintani; "In accordance with information including a conversation between occupants of the vehicle X, the excitement determination unit 421 determines whether or not the feeling or the atmosphere of occupants in the vehicle X corresponds to excitement (FIG. 5, STEP 106). This process corresponds to a primary determination process for determining the presence or absence of excitement;" Fig. 5; ¶: 0038].
With respect to claim 13, Shintani discloses: “The method as claimed in claim 2, wherein evaluating information about the state and/or behaviour of any further passengers encompasses at least information about the behavior of the further passengers and/or about an atmosphere between the passengers” [Shintani; "The information generating unit 430 determines whether or not the second feeling of the occupant to the target keyword estimated by the feeling estimation and determination unit 4211 corresponds to affirmation (sympathy or the like) (FIG. 5, STEP 116). When it is determined that the second feeling of the occupant does not correspond to affirmation such as corresponding to denial (FIG. 5, STEP 116, NO), the process on and after the determination of presence or absence of excitement is repeated (see FIG. 5, STEP 106 to STEP 116). On the other hand, when it is determined that the second feeling of the occupant corresponds to affirmation (FIG. 5, STEP 116, YES), the information generating unit 430 acquires information associated with the target keyword (FIG. 5, STEP 118). Such information may be searched from an external information source each time. In this case, the external information frequently obtained (automatically transmitted) from the external information source may be temporarily stored in the storage unit 13 (23), and information may be selected therefrom. The information generating unit 430 outputs this information via at least one of the display unit 15 (25) and the audio output unit 17 (27) (FIG. 5, STEP 120). This output information is provided as “information suitable for a content of a conversation between occupants of the vehicle X” or “information suitable for an atmosphere of occupants of the vehicle X”;" Fig. 5; ¶: 0044; See also: ¶: 0035-0043, 0045, 0046].
With respect to claim 14, Shintani discloses: “The method as claimed in claim 1, further comprising performing at least one of the following actions: recording the passenger compartment of the motor vehicle by a 2D or 3D camera or radar or lidar, recording sounds and/or ascertaining gases in the passenger compartment of the motor vehicle” [Shintani; "The in-vehicle state information acquisition unit 412 acquires in-vehicle state information. A motion image which indicates movement of an occupant (in particular, a fellow passenger or a secondary passenger (a second occupants of the driver (the first occupant) of the vehicle X) captured by the image capturing unit 191 (291) such as a view of closing the eyes, a view of looking out of the window, a view of operating a smartphone, or the like may be acquired as in-vehicle state information. A content of a conversation between the first occupant and the second occupant or an utterance of the second occupant sensed from the audio input unit 192 (292) may be acquired as occupant information;" ¶: 0030].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claim(s) 6 and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shintani in view of Bates et al. (United States Patent Publication 2021/0061471 A1), referenced as Bates moving forward.
With respect to claim 6, while Shintani discloses: “The method as claimed in claim 2” [Shintani; See above rejection under 35 U.S.C. § 102(a)(1) pertaining to claim 2], Shintani does not specifically state: “wherein preferences of the first passenger and/or of the further passenger(s) for the information and/or entertainment is input.”
Bates, which is in the same field of invention of systems/methods for controlling information/entertainment systems, teaches: “wherein preferences of the first passenger and/or of the further passenger(s) for the information and/or entertainment is input” [Bates; "In some embodiments, the predicted preference listing is based on past purchases, interests, and other available collected or stored information (e.g., from third-party applications or databases) of the passenger. When the passenger makes a selection or requests one of the goods or services on the predicted preference listing, the server system 140 records the selection or request, and updates the predicted preference listing;" ¶: 0020;
"In some embodiments, the method 400 further includes the steps of collecting, during the travel segment, passenger behavior data and information from the interactive session; and updating, at an end of the travel segment and using a machine learning/neural network, the data for predictive preference selection for a future travel segment based on the collecting. For example, the predictive preference selection (and/or traveler profile) that was initially developed based on only “static” or non-interactive information can now be updated based on interactions that the passenger has had on their first journey with the commercial carrier;" Fig. 4; ¶: 0055].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the information providing system/method disclosed by Shintani to incorporate the teachings regarding customizing a personalized display menu for a passenger based on previously gathered user preferences, as taught by Bates, with a reasonable expectation of success. Combining these inventions yields a system/method for providing information to a passenger of a vehicle that is more robust in its ability to provide a personalized display menu for a particular passenger without necessarily having to develop a customized (or personalized or individualized) vehicle entertainment system, an undertaking known in the industry to increase development and deployment times, which is undesirable in the field. This is particularly beneficial in the context of commercial passenger vehicles, in which many passengers may interact with the same display menu on their respective trips [Bates; ¶: 0015-0017].
With respect to claim 9, while Shintani discloses: “The method as claimed in claim 8” [Shintani; See above rejection under 35 U.S.C. § 102(a)(1) pertaining to claim 8], Shintani does not specifically state: “wherein distances travelled are stored and are taken into consideration by a memory reading step during subsequent selection of information and/or entertainment.”
Bates teaches: “wherein distances travelled are stored and are taken into consideration by a memory reading step during subsequent selection of information and/or entertainment” [Bates; "In one exemplary aspect, a method implemented in a commercial passenger vehicle includes storing from a server system communication on a memory and a display screen of a portable mobile device (PED) and/or a portable screen monitor (PCM) of the passenger prior to a start of a current travel segment of the commercial passenger vehicle, data for predictive preference selection during the current travel segment; determining, during the current travel segment, a personalized display menu for the passenger of one or more items or services from a plurality of items and services, wherein the determining is based on a traveler profile of the passenger that comprises at least one of biographic or demographic information for the passenger, a duration of a previous travel segment or the current travel segment, an origin or a destination of the previous travel segment or the current travel segment, a seat class in the previous travel segment or the current travel segment, a mileage membership status and the data for predictive preference selection; and providing, during the travel segment, an interactive session having a personalized display menu based on the traveler profile on the display screen of the PED and/or the PCM to the passenger based on the determining;" ¶: 0004].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the information providing system/method disclosed by Shintani to incorporate the teachings regarding customizing a personalized display menu for a passenger based on previously gathered user preferences, as taught by Bates, with a reasonable expectation of success. Combining these inventions yields a system/method for providing information to a passenger of a vehicle that is more robust in its ability to provide a personalized display menu for a particular passenger without necessarily having to develop a customized (or personalized or individualized) vehicle entertainment system, an undertaking known in the industry to increase development and deployment times, which is undesirable in the field. This is particularly beneficial in the context of commercial passenger vehicles, in which many passengers may interact with the same display menu on their respective trips [Bates; ¶: 0015-0017].
Prior Art (Not relied upon)
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the attached Form PTO-892.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAMI N BEDEWI whose telephone number is (571)272-5753. The examiner can normally be reached Monday through Thursday, 6:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne, can be reached at 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.N.B./Examiner, Art Unit 3666C
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666