DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
Claims 3-9 are pending. Claims 1-2 have been cancelled. Claims 3-9 have been newly added. Claim 3 is the independent claim. This Office action is in response to the amendments received on 11/06/2025.
Response to Arguments
With respect to applicant’s Remarks filed on 11/06/2025, “Applicant Arguments/Remarks Made in an Amendment” have been fully considered. Applicant’s remarks will be addressed in sequential order as they were presented.
In view of the applicant’s Remarks, the objection to the drawings has been withdrawn.
In response to the amended claims filed on 11/06/2025, the rejection of claim 1 under 35 U.S.C. § 112(b) is withdrawn, as claim 1 is cancelled. Also, with respect to the amended claims, the interpretation of the claims under 35 U.S.C. § 112(f) has been withdrawn.
Applicant’s arguments in the Remarks filed on 11/06/2025 (see pages 7-37, section “The Claim Rejections under 35 U.S.C. § 103”), with respect to the rejections of claims 1-2 (now cancelled) and claims 3-9 (newly added), have been fully considered, but they are, respectfully, not persuasive.
Applicant’s arguments fall mainly into six categories. First, applicant argues that Sultanoğlu fails to disclose the method step of synchronizing lip and mouth movements with speech output from the system (Remarks, page 8, last paragraph). The argument is, respectfully, not persuasive. According to the last paragraph of Section DETAILED DESCRIPTION OF THE INVENTION, Sultanoğlu discloses “The assistant, which is the subject of the invention, has a three-dimensional holographic face image. Here, the desired face type can be selected, as well as the real three-dimensional face of someone known as the assistant's face or a character can be determined. By choosing the face of a loved one or the face of a lost relative, the feeling of speaking with that person can be created. When the assistant speaks, his lips can move according to the word he speaks, thus making it feel like a real person is speaking. Instead of the three-dimensional hologram screen in the inventive system, two-dimensional screens available in vehicles can also be used, and thus a more cost-effective structure can be obtained.” Therefore, Sultanoğlu teaches or suggests synchronizing lip and mouth movements with speech output from the system.
Second, applicant argues that no prior art of record teaches integrating all of the limitations and that the combination would be non-obvious (see Remarks, page 19, last paragraph, and page 20). The argument is, respectfully, not persuasive. Applicant is reminded that one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Third, applicant argues (see last paragraph on page 23) that the stated motivation to combine the prior art relied upon addresses a different problem than that solved by the claimed invention. Applicant argues that the stated motivation in the Office action focuses on functional task completion through voice commands, while the claimed invention solves the problem of creating an emotional connection between the user and the vehicle. The argument is, respectfully, not persuasive. First, it is the Office’s position that the claimed improvement of creating a human connection is a result of the functional tasks performed. Therefore, while the various individual design choices/limitations are obvious to use for reasons other than that their specific combination results in more human connection, that does not make the combination suddenly non-obvious or patentable, even if the prior art individually or as a whole is silent as to this improvement. According to MPEP form paragraph 7.37.07, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985). Second, the primary reference of record also discloses creating an emotional bond between the user and the vehicle (see Sultanoğlu, Abstract, and section PRIOR ART, first two paragraphs). Third, above all, according to the Non-Final Office action mailed on 08/13/2025 (see page 29, first paragraph), the motivation to combine the prior art of record to arrive at the claimed invention is recited as “with the motivation of establishing a connection between the user and the car in a way that user feels to have someone to help with his/her task which enhance the user experience in car by at least fulfilling some possible requests while driving and saving time.” Therefore, the applicant’s argument is moot.
Fourth, on page 28, second paragraph, applicant argues that all of the prior art relied upon, as well as the automotive industry, teaches away from systems that require drivers to engage visually with displays while operating the vehicle, and instead emphasizes audio-only interaction so that drivers need not look at displays. Accordingly, applicant argues that the claim explicitly recites that the visual element is provided “to provide the user with an experience of conversing with a human-like presence” (see page 28, last paragraph). However, nothing in the claim requires looking at the display to see the synchronized facial movements. In fact, a driver can have a loved one in the passenger seat and converse with that person without looking at his/her face; thus, while the emotional connection (the recited motivation) is there, the driver is not required to look at his/her face. Similarly, the claimed limitation that the assistant image “moves lips and mouth in synchronization with speech output from the system” does not require the driver to look at the display in order to arrive at the aforementioned goal of creating a connection between the user and the vehicle. It is therefore the Office’s position that the argument is not persuasive.
Fifth, on page 30, last paragraph, applicant argues that substantial differences exist between the teachings of the prior art and the claimed invention, since the prior art is directed to functional voice control for task completion, whereas the claimed invention is directed to emotional connection through human-like interaction. The Office respectfully disagrees, because the primary reference, Sultanoğlu, is also directed to emotional connection, and it would be obvious to combine Sultanoğlu with the other prior art of record relied upon to arrive at the claimed invention with that motivation. Furthermore, the term “emotional connection”, as recited in newly added claim 8, renders the metes and bounds of the claim unclear (see the rejection of claim 8 under 35 U.S.C. § 112(b) in the Office action below).
Sixth, on page 31, second paragraph, applicant argues that Graham’s temporal condition detection is based on if-then rules and not on analyzing the behavioral frequency of a task; however, the argument is, respectfully, not persuasive. Graham, according to paragraph [0027], teaches the claimed predetermined-frequency-based pattern recognition. Applicant mainly argues that Graham’s invention detects an immediate relationship between a present condition and a behavior, and then automates that behavior whenever the condition recurs, while the claimed method detects repetition with a predetermined frequency, tracks how often a particular behavior occurs, and automates that action if it happens a predetermined number of times (predetermined frequency). However, according to paragraph [0027], Graham teaches, “The number of times that a driver must repeat a behavior before it is learned is preset.”, and further, according to paragraph [0036], Graham teaches “in other embodiments the behavior must simply be repeated with a certain frequency, i.e., more often than not”. Therefore, Graham teaches the claimed limitation, and the rejection of the limitation as being unpatentable over Graham is maintained (see the Office action below).
Office Note: Due to applicant’s amendments, new claim rejections appear of record as stated in the Office action below.
It is the Office’s position that all of applicant’s arguments have been fully considered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 8 recites the limitation “the AI-based chat service generates contextually relevant conversational responses to create an emotional connection between the user and the vehicle.” It is not clear to the examiner how to measure whether an emotional connection has been established. In fact, while generating contextually relevant conversational responses can be examined, the emotional connection cannot be measured. Therefore, the metes and bounds of the claim are not definite. For the purpose of examination, and under the broadest reasonable interpretation, it is assumed that any contextually relevant responses create an emotional connection, because the applicant did not clearly define what constitutes creating an emotional connection between a human and a device.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3-9 are rejected under 35 U.S.C. 103 as being unpatentable over Sultanoğlu, TR 2021019768 A2, hereinafter “Sultanoğlu”, in view of Watanabe, US 20220315010 A1, hereinafter “Watanabe”, further in view of Ramaci, WO 2019213177 A1, hereinafter “Ramaci”, further in view of Gandhi et al., US 20050100142, hereinafter “Gandhi”, and further in view of O'Brien et al., US 11410218 B1, hereinafter “O’Brien”, or, in the alternative, Nadkar, US 11164570, hereinafter “Nadkar”, and further in view of Graham, US 20150353038, hereinafter “Graham”.
Regarding claim 3, Sultanoğlu discloses A method of operating a smart vehicle assistance system with artificial intelligence (Abstract, “smart car assistant with artificial intelligence”), the method comprising: capturing visual data of a face of a user when the user enters a vehicle (Section OBJECTIVE OF THE INVENTION, third para, “a smart vehicle assistant that recognizes the user's face”, Section DETAILED DESCRIPTION OF THE INVENTION, third para, “At least one camera to recognize the face of the user”); performing image processing on the visual data to recognize the user by (Section DETAILED DESCRIPTION OF THE INVENTION, Sixth para, “the image is processed by the camera inside, and the face of the person is recognized by the system, and the emotional state of the person is also detected.”); providing an audible personalized greeting addressing the user by name upon recognition of the user (Section DETAILED DESCRIPTION OF THE INVENTION, Sixth para, “when the owner of the vehicle gets into the vehicle, the person is recognized and said, "Welcome... Sir/Madam,”); identifying emotional expressions on the face of the user through emotion recognition processing (Section DETAILED DESCRIPTION OF THE INVENTION, Sixth para, “the image is processed by the camera inside, and the face of the person is recognized by the system, and the emotional state of the person is also detected.”, Claims section, first para, “detecting the user's emotional state”), providing an audible suggestion based on the identified emotional expression (Section DETAILED DESCRIPTION OF THE INVENTION, Sixth para, “According to the detected emotion, suggestions are offered to the person by the system. For example, when the owner of the vehicle gets into the vehicle, the person is recognized and said, "Welcome... Sir/Madam, you look unhappy today, would you like me to make you coffee?”); continuously monitoring for an occurrence of a wake-up word (Section DETAILED DESCRIPTION OF THE INVENTION, Sixth para, “The user uses “Hey Dizzy” or a similar wake-up word before speaking to the system.”); activating voice command processing upon detecting the wake-up word (Section DETAILED DESCRIPTION OF THE INVENTION, third para, “microphone to detect the voice commands and questions directed by the user”); displaying an assistant image on a display, wherein the assistant image comprises one of a two-dimensional image or a three-dimensional holographic image (Section DETAILED DESCRIPTION OF THE INVENTION, second para, “three-dimensional hologram screen to give the assistant a three-dimensional face”, Section Claims, last para “two-dimensional screen to give the assistant a face”), and wherein the assistant image moves lips and mouth in synchronization with speech output from the system to provide the user with an experience of conversing with a human-like presence (Section DETAILED DESCRIPTION OF THE INVENTION, last para, “The assistant, which is the subject of the invention, has a three-dimensional holographic face image. Here, the desired face type can be selected, as well as the real three-dimensional face of someone known as the assistant's face or a character can be determined. By choosing the face of a loved one or the face of a lost relative, the feeling of speaking with that person can be created. 
When the assistant speaks, his lips can move according to the word he speaks, thus making it feel like a real person is speaking.”); receiving a voice command upon detection of the wake-up word (Section DETAILED DESCRIPTION OF THE INVENTION, sixth para, “The user uses “Hey Dizzy” or a similar wake-up word before speaking to the system.”, and Section DETAILED DESCRIPTION OF THE INVENTION, third para, “microphone to detect the voice commands and questions directed by the user”);
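For illustration only (and not part of the record), the wake-up word monitoring and activation steps mapped above could be realized along the following lines. This is a minimal Python sketch; transcribe() and handle_command() are hypothetical stand-ins for a speech recognizer and a command handler, and the wake-up word follows Sultanoğlu’s “Hey Dizzy” example.

    WAKE_WORD = "hey dizzy"  # example wake-up word per Sultanoğlu

    def monitor(audio_chunks, transcribe, handle_command):
        """Continuously monitor audio; activate voice command processing
        only after the wake-up word is detected."""
        armed = False
        for chunk in audio_chunks:
            text = transcribe(chunk).lower()   # hypothetical speech-to-text call
            if not armed:
                armed = WAKE_WORD in text      # arm on hearing the wake-up word
            else:
                handle_command(text)           # treat next utterance as a command
                armed = False                  # return to passive monitoring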
Sultanoğlu doesn’t explicitly disclose performing image processing on the visual data to recognize the user by comparing the visual data with photos in a database, and converting the voice command from speech to text.
However, Watanabe teaches performing image processing on the visual data to recognize the user by comparing the visual data with photos in a database ([0168]).
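For illustration only (and not part of the record), recognizing a user by comparing captured visual data with photos in a database could take the following minimal form. The embedding-and-cosine-similarity approach, and the embed() feature extractor, are assumptions made for this sketch, not Watanabe’s disclosed implementation.

    import numpy as np

    def recognize(captured, database, embed, threshold=0.8):
        """Return the best-matching enrolled user's name, or None."""
        probe = embed(captured)  # hypothetical face-feature extractor
        best_name, best_score = None, threshold
        for name, photo in database.items():
            ref = embed(photo)
            score = float(np.dot(probe, ref) /
                          (np.linalg.norm(probe) * np.linalg.norm(ref)))
            if score > best_score:   # keep the closest match above threshold
                best_name, best_score = name, score
        return best_name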
However, Ramaci teaches converting the voice command from speech to text (at least page 8, lines 8-9, “a speech-to-text (STT) processing module”, i.e., a module that converts spoken language into written text; and at least page 7, lines 42-43, “natural language processing and predictive algorithms”); interpreting the text through natural language processing to determine a requested action (at least page 7, lines 42-43, “natural language processing and predictive algorithms”; page 22, lines 30-33, “any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent.”);
Further, Sultanoğlu teaches performing the requested action by executing at least one of: (a) controlling vehicle equipment by sending control signals via a vehicle communication protocol, wherein the vehicle equipment is selected from the group consisting of lighting, seats, windows, doors, and combinations thereof (Section DETAILED DESCRIPTION OF THE INVENTION, fourth para, “Electronic motherboard and electronic board containing motor drivers communicate via Ethernet connection and CANBUS protocol. The electronic card, which receives data from the electronic motherboard with the CANBUS protocol, uses this data and the CANBUS protocol to allow the operation of the seats, [] lights and similar equipment in the vehicle.”, section Claims, “at least one electronic card containing motor drivers to control the equipment inside the vehicle”); (b) receiving at least one of vehicle malfunction codes from an OBD system, fuel range data, and battery charge level data from a vehicle system via a vehicle communication protocol (Abstract, “can transmit malfunctions and range information that may occur in the vehicle”, Section TECHNICAL FIELD, first para, “A smart tool with artificial intelligence that can [] tell the vehicle's faults,”, Section OBJECTIVE OF THE INVENTION, eighth para, “ensure that the malfunctions that may occur in the vehicle and also the amount of fuel or battery remaining can be told to the user audibly how far can be traveled.”, Section DETAILED DESCRIPTION OF THE INVENTION, fourth para, “communicate via Ethernet connection and CANBUS protocol.”, and fifth para),
Sultanoğlu doesn’t disclose converting the received data to speech for audible communication to the user;
However, Ramaci teaches converting the received data to speech for audible communication to the user (Ramaci, at least page 7, line 27, “text-to-speech (TTS) processing/services”, at least page 7, second para, “receive, from the voice translation service, a verbal response to the user’s voice command, and to broadcast the response through the speaker.”, Fig. 4).
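For illustration only (and not part of the record), converting received vehicle data into an audible message could take the following minimal form; speak() is a hypothetical text-to-speech call standing in for the TTS services cited from Ramaci, and the parameter names are assumptions.

    def announce_vehicle_status(dtc_codes, fuel_range_km, battery_pct, speak):
        """Compose and speak one status sentence from received vehicle data."""
        parts = []
        if dtc_codes:  # e.g., OBD diagnostic trouble codes such as "P0301"
            parts.append("malfunction codes " + ", ".join(dtc_codes))
        parts.append(f"remaining range {fuel_range_km} kilometers")
        parts.append(f"battery at {battery_pct} percent")
        speak("Vehicle status: " + "; ".join(parts) + ".")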
Further, Sultanoğlu discloses: (c) initiating a chat interaction with an AI-based chat service (Section DETAILED DESCRIPTION OF THE INVENTION, seventh para; Section OBJECTIVE OF THE INVENTION, sixth para, “has the ability to answer the questions asked.”; section DETAILED DESCRIPTION OF THE INVENTION, last two paragraphs), querying at least one of an online search engine and an online library platform for an answer (section DETAILED DESCRIPTION OF THE INVENTION, seventh para, “question is searched by the software in the search engine and Wikipedia,”); translating into a desired language (Section OBJECTIVE OF THE INVENTION, seventh para, “translated into the desired language”; section Claims, first para, “able to translate in different languages”; Section Claims, “in Turkish or the desired language, able to translate in different languages,”); reading news (Abstract, and Section OBJECTIVE OF THE INVENTION, fifth para, “features such as reading news”); and (g) accessing an email service (e.g., section Claims, first para, “sending e-mails, creating notes and recording alarms, allowing voice control of the equipment in the vehicle”),
Sultanoğlu doesn’t explicitly disclose:
conducting a conversational exchange with the user through text-to-speech and speech-to-text conversion,
and providing the answer audibly to the user,
and providing the translation audibly to the user.
However, Ramaci teaches conducting a conversational exchange with the user through text-to-speech and speech-to-text conversion (Page 7, last paragraph, “one or more remote cloud-based servers configured to perform speech-to-text (STT) and text-to-speech (TTS) conversion services, wherein the wireless communication device and the conversion services located on the one or more remote cloud-based servers constitute serve as an Artificial Intelligence 40 (AI) assistant, the AI assistant providing conversational interactions with a user”), and providing the answer audibly to the user (Page 32, “generated dialogue response can be in the form of a text string that speech synthesis module 411 can convert to an audible speech output”), and providing the translation audibly to the user (at least Page 7, second para, “receive, from the voice translation service, a verbal response to the user’s voice command, and to broadcast the response through the speaker.”, Fig. 4).
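For illustration only (and not part of the record), the conversational exchange through speech-to-text and text-to-speech cited from Ramaci reduces to the following round-trip sketch; stt(), nlu(), and tts() are hypothetical stand-ins for the cloud-based conversion and natural language services.

    def conversational_turn(audio_in, stt, nlu, tts):
        """One spoken exchange: speech -> text -> intent/response -> speech."""
        text = stt(audio_in)   # speech-to-text conversion of the utterance
        reply = nlu(text)      # interpret the text and generate a response
        return tts(reply)      # text-to-speech conversion for audible output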
Sultanoğlu doesn’t explicitly disclose performing at least one of reading incoming emails audibly to the user and sending an email based on voice-dictated content from the user; or (h) accessing a shopping service, receiving a voice-specified product request, searching for the product, and completing a purchase; recording user actions and preferences over time in persistent storage; detecting when recorded user actions repeat with a predetermined frequency across multiple vehicle entries; and automatically implementing settings based on the detected repetitive user actions when the user subsequently enters the vehicle.
However, Gandhi teaches performing at least one of reading incoming emails audibly to the user (at least [0008], “allowing access to Web portals and other services such as electronic mail”; [0011], “the method can include receiving at least one additional user spoken utterance and converting the additional user spoken utterance to text. Notably, the formatting step can build an electronic mail to be sent in the sending step.”) and sending an email based on voice-dictated content from the user ([0035], “The service engine 220 can format the text of IM's, electronic mails,”, “The service engine 215 also can annotate text that is being provided to the TTS engine 215 to control the manner in which the text is to be read or played to the user.”).
However, either O'Brien or Nadkar, individually or in combination, teaches accessing a shopping service, receiving a voice-specified product request, searching for the product, and completing a purchase. O'Brien, according to at least Fig. 9 and Col 8, second para, discloses a Smart Shopping Assistant which can be used in a vehicle, where a user can interact through voice commands and shop through voice commands, with a computing device receiving requests from a user (Col 8, second para), searching for the product (e.g., Col 1, Lines 38-40; Col 10, last para), and completing a purchase. Also, Nadkar, according to Col 1, last para, and Col 2, lines 1-3, “a second voice assistant may be better at adding items to shopping list.”, discloses a voice assistant tracking system used in a vehicle and teaches a voice assistant that adds items to a shopping list through voice commands. According to the cited passages of the aforementioned references, either of these references, individually or in combination, teaches the claimed limitation.
However, Graham teaches recording user actions and preferences over time in persistent storage; detecting when recorded user actions repeat with a predetermined frequency across multiple vehicle entries; and automatically implementing settings based on the detected repetitive user actions when the user subsequently enters the vehicle (at least Abstract, “When the control system determines that the identified driver repeats the same behavior in response to the same temporal conditions, the controller learns that behavior and associates it with the identified driver so that it can be automatically performed, without driver interaction, under the same temporal conditions in the future.”, [0009], [0027], “The number of times that a driver must repeat a behavior before it is learned is preset.”, [0036], “in other embodiments the behavior must simply be repeated with a certain frequency, i.e., more often than not”).
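For illustration only (and not part of the record), the preset repetition threshold quoted from Graham’s paragraph [0027] corresponds to a simple counting scheme of the following form; the class and parameter names are hypothetical.

    from collections import Counter

    class BehaviorLearner:
        def __init__(self, preset_count=3):
            self.preset_count = preset_count  # repetitions before learning
            self.counts = Counter()           # (user, action) -> observations
            self.learned = set()              # behaviors to auto-apply

        def observe(self, user, action):
            """Record one occurrence of an action during a vehicle entry."""
            self.counts[(user, action)] += 1
            if self.counts[(user, action)] >= self.preset_count:
                self.learned.add((user, action))

        def actions_on_entry(self, user):
            """Actions to implement automatically when the user enters."""
            return [a for (u, a) in self.learned if u == user]

Under Graham’s alternative in paragraph [0036], the test in observe() could instead compare relative frequency (“more often than not”) rather than a preset count.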
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the smart vehicle assistant with artificial intelligence as taught by Sultanoğlu with the step of recognizing the user by comparing the visual data with photos in a database as taught by Watanabe; to further use the well-known speech-to-text, text-to-speech, and natural language processing techniques for processing the voice commands, interpreting text (converted from the voice commands), and audibly outputting responses or received data to the user as, for example, taught by Ramaci; to further modify the Sultanoğlu system with different artificial intelligence modules, including a module to perform known AI actions such as reading incoming emails audibly to the user and sending emails based on voice-dictated content from the user as also taught by Gandhi, and an online shopping service assistant as taught by O'Brien or Nadkar; and to further modify it to include the feature of automatically implementing settings based on the detected repetitive user actions as taught by Graham, with a reasonable expectation of success, and with the motivation of establishing a connection between the user and the car (as also disclosed by Sultanoğlu), in a way that the user feels to have someone to help with his/her tasks, which enhances the user experience in the car by at least fulfilling some possible requests while driving and saving time.
Regarding claim 4, Modified Sultanoğlu teaches the method of claim 3, and teaches wherein the emotion recognition processing identifies at least one of happiness, sadness, and anger based on facial features of the user (Section DETAILED DESCRIPTION OF THE INVENTION, third para, “one camera to recognize the face of the user and to detect the happiness, sadness, anger and similar feelings of the user”, Section DETAILED DESCRIPTION OF THE INVENTION, sixth para, “the image is processed by the camera inside, and the face of the person is recognized by the system”).
Regarding claim 5, Modified Sultanoğlu teaches the method of claim 3, and teaches wherein displaying the assistant image comprises displaying a three-dimensional holographic image (Section DETAILED DESCRIPTION OF THE INVENTION, second para, “three-dimensional hologram screen to give the assistant a three-dimensional face”).
Regarding claim 6, Modified Sultanoğlu teaches the method of claim 3, and although Sultanoğlu teaches perceiving the commands given in daily and natural speaking language (see at least Abstract; section OBJECTIVE OF THE INVENTION, sixth para; section DETAILED DESCRIPTION OF THE INVENTION, second para, “[p]erceive commands given in everyday and natural speaking language”), Sultanoğlu doesn’t explicitly disclose wherein interpreting the text through natural language processing comprises processing natural language input in a daily speaking language to determine user intent.
Nevertheless, Ramaci teaches wherein interpreting the text through natural language processing comprises processing natural language input in a daily speaking language to determine user intent (at least page 8, lines 8-9, “a speech-to-text (STT) processing module”, i.e., a module that converts spoken language into written text; at least page 7, lines 42-43, “natural language processing and predictive algorithms”; page 9, lines 8-10, “the voice-controlled speech user interface of said device detects or monitors audio input/output and interacts with a user to determine a user intent based on natural language understanding”; page 22, lines 29-33, “interprets natural language input in spoken and/or textual form to infer user intent,”; page 31, last para).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the smart vehicle assistant with artificial intelligence as taught by modified Sultanoğlu with the well-known techniques of interpreting text through natural language processing and of speech-to-text and text-to-speech conversion, for processing natural language input in a daily speaking language as, for example, taught by Ramaci, with a reasonable expectation of success, and with the motivation of establishing a connection between the user and the car (as also disclosed by Sultanoğlu), in a way that the user can have a human-like interaction with the vehicle, which enhances the user experience in the car by at least fulfilling some possible requests while driving and saving time.
Regarding claim 8, Modified Sultanoğlu teaches the method of claim 3, and it teaches wherein the AI-based chat service generates contextually relevant conversational responses (Sultanoğlu, section PRIOR ART, fifth para; section OBJECTIVE OF THE INVENTION, sixth para; section DETAILED DESCRIPTION OF THE INVENTION, second para) to create an emotional connection between the user and the vehicle (Section PRIOR ART, fifth para, “There is a need for an intelligent vehicle assistant that will establish an emotional bond with the vehicle used,”; section OBJECTIVE OF THE INVENTION, fourth para).
Although Sultanoğlu discloses wherein the AI-based chat service generates contextually relevant conversational responses (see the previous paragraph), for the purpose of compact prosecution, Ramaci also more explicitly discloses wherein the AI-based chat service generates contextually relevant conversational responses (page 7, last para, “a vehicle telematics system comprising a wireless communication device, [] serve as an Artificial Intelligence 40 (AI) assistant, the AI assistant providing conversational interactions with a user []”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the smart vehicle assistant with artificial intelligence as taught by modified Sultanoğlu with an AI-based chat service generating contextual conversation as, for example, taught by Ramaci, with a reasonable expectation of success, and with the motivation of establishing a connection between the user and the car (as also disclosed by Sultanoğlu), in a way that the user can have a human-like interaction through contextually relevant conversational interactions with the vehicle, which enhances the user experience in the vehicle and creates an emotional bond, as well as performing tasks such as answering questions.
Regarding claim 9, Modified Sultanoğlu teaches the method of claim 3; however, Sultanoğlu doesn’t explicitly teach wherein automatically implementing settings comprises at least one of adjusting vehicle lighting, adjusting seat position, and adjusting climate control based on the detected repetitive user actions.
However, Graham teaches wherein automatically implementing settings comprises at least one of adjusting vehicle lighting, adjusting seat position, and adjusting climate control based on the detected repetitive user actions (at least Abstract, “When the control system determines that the identified driver repeats the same behavior in response to the same temporal conditions, the controller learns that behavior and associates it with the identified driver so that it can be automatically performed, without driver interaction, under the same temporal conditions in the future.”, [0009], [0022], [0038], and Figs. 1 and 4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the smart vehicle assistant with artificial intelligence as taught by modified Sultanoğlu to include the feature of automatically implementing settings of at least one of vehicle lighting, seat position, and climate control, based on the detected repetitive user actions as taught by Graham, with a reasonable expectation of success, and with the motivation of increasing at least the comfort of the driver while establishing a connection between the user and the car (as also disclosed by Sultanoğlu).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sultanoğlu, in view of Watanabe, further in view of Ramaci, further in view of Gandhi, further in view of O'Brien or, in the alternative, Nadkar, further in view of Graham, and further in view of Yamasaki et al., US 20200320992 A1, hereinafter “Yamasaki”.
Regarding claim 7, Modified Sultanoğlu teaches the method of claim 3; however, it doesn’t teach further comprising applying noise cancellation processing to audio input to isolate a voice of the user from environmental noise within the vehicle.
Nevertheless, Yamasaki teaches further comprising applying noise cancellation processing to audio input to isolate a voice of the user from environmental noise within the vehicle ([0022], “the AI-based filtering module processes the microphone system input to isolate the voice commands (e.g. with reduced distortion) by canceling the environmental (and other) noise as well as other non-relevant conversations. Finally, the isolated voice commands are processed by a speech recognition module to specifically identity the requests of the one or more passengers.”, [0023]- [0024], “enabling noise cancellation in the dynamic microphone system”).
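For illustration only (and not part of the record), the concept of isolating a voice from steady cabin noise can be conveyed by a classical spectral-subtraction sketch of the following form; Yamasaki’s cited AI-based filtering module is more sophisticated, and this analogue is an assumption, not Yamasaki’s implementation. The noise_profile is assumed to be at least one frame long.

    import numpy as np

    def spectral_subtract(signal, noise_profile, frame=512):
        """Subtract an estimated noise magnitude spectrum frame by frame."""
        noise_mag = np.abs(np.fft.rfft(noise_profile[:frame]))
        out = np.zeros(len(signal))
        for start in range(0, len(signal) - frame + 1, frame):
            spec = np.fft.rfft(signal[start:start + frame])
            mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
            phase = np.angle(spec)
            out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
        return out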
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the smart vehicle assistant with artificial intelligence as taught by modified Sultanoğlu with the noise cancellation step as taught by Yamasaki, in order to increase the accuracy of the voice recognition system by not being distracted by other noise, with a reasonable expectation of success, and with the motivation of establishing a connection between the user and the car (as also disclosed by Sultanoğlu) while improving the driver’s comfort and driving experience.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAJAR HASSANIARDEKANI whose telephone number is (571)272-1448. The examiner can normally be reached Monday thru Friday 8 am-5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski, can be reached at 571-270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.H./Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669