NON-FINAL REJECTION, FIRST DETAILED ACTION
Status of Prosecution
The present application, 18/169,458, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The application was filed on February 15, 2023, and claims priority to Japanese application JP2021-021703, filed on Feb. 16, 2022.
Claims 1-12 are pending and are rejected. Claims 1, 11, and 12 are independent.
Status of Claims
Claims 1, 4, 8, and 11-12 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019.
Claims 2-3 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen in view of Ichimura, and in further view of Kim et al. (“Kim”), Korean Patent Publication KR101754304B1, published on August 7, 2012.
Claims 5, 6, and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen in view of Ichimura, and in further view of Querze III et al. (“Querze”), United States Patent Application Publication 2020/0314524, published on Oct. 1, 2020.
Claim 7 is rejected under 35 U.S.C. § 103 as being unpatentable over Christensen in view of Ichimura, and in further view of Kim et al. (“Kim 2015”), United States Patent Application Publication 2015/0199848, published on July 16, 2015.
Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Christensen in view of Ichimura, in further view of the non-patent literature A. Benoit et al. (“Benoit”), “Head Nods Analysis: Interpretation of Non-Verbal Communication Gestures,” published in 2005, and in further view of Querze.
Specification – Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A.
Claims 1, 4, 8, and 11-12 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019.
As to Claim 1, Christensen teaches: An information provision system configured to provide information by sound, the information provision system comprising:
a processor (Christensen: par. 0153, processor [80]); and wherein the processor causes the information provision system to perform operations, the operations comprising:
acquiring position information indicating a position where a user is present and line-of-sight direction information indicating a line-of-sight direction corresponding to a direction in which a face of the user faces (Christensen: par. 0069, a GPS unit acquires position information of a user; par. 0192, the pitch and yaw of the user's head (i.e., the line-of-sight direction) define a viewing direction toward a point of interest (POI));
estimating a target visually recognized by the user based on the position information, the line-of-sight direction information, and target position information set in advance for each of a plurality of targets that are possible targets visually recognizable by the user (Christensen: Fig. 6, par. 0182-83, POI’s within a field of view and inside a first distance threshold are identified; par. 0195, the POI’s are stored in advance in a database);
outputting, by sound, description information about the target (Christensen: par. 0181, the user may request the outputting of spoken presentation related to a POI by pressing a button).
Christensen may not explicitly teach: a memory storing instructions that, when executed by the processor, cause the information provision system to perform operations,
outputting, by sound, description information about the target in accordance with a setting related to information provision;
detecting a motion of a head of the user;
estimating an intention of the user based on the motion of the head of the user during output of the description information;
selecting the setting in accordance with the intention of the user; and
outputting, in response to change of the setting, the description information in accordance with the setting after the change.
Ichimura teaches general concepts related to a head-mounted display that allows a user to communicate intentions through detected head movements (Ichimura: Abstract). Specifically, Ichimura teaches that a sound volume may be lowered (i.e., a setting change) under specific conditions, including when the user's head movement is detected in a certain direction (Ichimura: par. 0154). Ichimura also teaches a computer-readable medium carrying instructions that cause a processor to perform these functions (Ichimura: par. 0070).
It would have been obvious and predictable to a person having ordinary skill in the art at a time before the effective filing date of the claimed invention to have modified the Christensen device by including computer instructions to allow for the adjustment of volume levels as taught and suggested by Ichimura. Such a person would have been motivated to do so to provide an intuitive user interface controlled by head movements.
As to Claim 4, Christensen and Ichimura teach the limitations of claim 1.
Ichimura further teaches: wherein the setting includes setting information related to sound output (Ichimura: par. 0154, sound volume (i.e. sound output)).
As to Claim 8, Christensen and Ichimura teach the limitations of claim 1.
Christensen further teaches: wherein the operations further comprise:
acquiring a virtual position of a sound source corresponding to each of the plurality of targets, outputting, from a portable sound output device mountable on the head of the user, and
in accordance with a virtual position of the sound source as viewed from a current position of the user, sound obtained by performing a stereophonic sound process on sound representing the description information (Christensen: par. 0192, for multiple POI's, the played audio information is emitted from the apparent direction of the sound source).
As to Claim 11, it is rejected for similar reasons as claim 1. Christensen further teaches that the computer is carriable by a user (Christensen: par. 0035, carriable on the user’s head as a headset).
As to Claim 12, it is rejected for similar reasons as claims 1 and 11.
B.
Claims 2-3 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019, and in further view of Kim et al. (“Kim”), Korean Patent Publication KR101754304B1, published on August 7, 2012.
As to Claim 2, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: wherein the description information includes first description information that is a description for the plurality of targets and second description information that is a description for the plurality of targets different from the first description information, and
wherein the setting includes information indicating which of the first description information and the second description information is selected as the description information.
Kim teaches general concepts related to providing specialized information for points of interest (Kim: Abstract). Specifically, for each POI, brief, basic, or additional information (i.e., different levels of detail) is output (Kim: par. 0079). User input may be applied to select the level of detail for a specific POI (Kim: par. 0053, according to a user selection).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the claimed invention to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to provide the different types of information for each POI as taught by Kim. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow the user to reduce cognitive burden in different contexts for the POIs of interest.
As to Claim 3, Christensen, Ichimura and Kim teach the limitations of claim 2.
Kim further teaches: wherein the description information further includes third description information that is a description for the plurality of targets different from the first description information and the second description information, wherein the first description information is a normal description for the plurality of targets, the second description information is a description more detailed than the first description information (Kim: par. 0079, there is brief, basic, or additional information), and
the third description information is a description simpler than the first description information, and wherein the setting includes information indicating which of the first description information, the second description information, and the third description information is selected as the description information (Kim: par. 0079, there is brief, basic or additional; par. 0053, according to a user selection).
C.
Claims 5, 6, and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019, and in further view of Querze III et al. (“Querze”), United States Patent Application Publication 2020/0314524, published on Oct. 1, 2020.
As to Claim 5, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: wherein the setting includes information indicating whether to continue output of the description information.
Querze teaches general concepts related to controlling a wearable audio device with a personally attributed audio engine (Querze: Abstract). Specifically, Querze teaches that sensors may detect the movement of a user's head and determine the gaze direction of the user (Querze: par. 0044). The detected motion may be used to activate an operating mode of the wearable audio device or to modify playback of audio (Querze: par. 0045). Audio playback may be paused and then resumed via the control engine, and thus via the head movement controls (Querze: par. 0097). The audio playback system may also prompt the user for playback (Querze: par. 0099, “the personally attributed audio engine 240 can prompt the user to initiate playback of the personally attributed audio playback for at least one additional user's wearable audio device that has entered that geographic location within a period.”).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the claimed invention to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to prompt the user about continuing output of the description information as taught and suggested by Querze. Such a person would have been motivated to do so, with a reasonable expectation of success, to improve the user experience by giving the user full control over the audio output.
As to Claim 6, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: wherein the operations further comprise:
outputting a question for the user by sound; and
estimating an answer of the user to the question based on the motion of the head of the user.
Querze teaches general concepts related to controlling a wearable audio device with a personally attributed audio engine (Querze: Abstract). Specifically, Querze teaches that sensors may detect the movement of a user's head and determine the gaze direction of the user (Querze: par. 0044). The detected motion (i.e., estimating an answer) may be used to activate an operating mode of the wearable audio device or to modify playback of audio (Querze: par. 0045). Audio playback may be paused and then resumed via the control engine, and thus via the head movement controls (Querze: par. 0097). The audio playback system may also prompt the user for playback (i.e., outputting a question for the user by sound) (Querze: par. 0099, “the personally attributed audio engine 240 can prompt the user to initiate playback of the personally attributed audio playback for at least one additional user's wearable audio device that has entered that geographic location within a period.”).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the claimed invention to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to output a question for the user by sound and to estimate the user's answer from the motion of the user's head as taught and suggested by Querze. Such a person would have been motivated to do so, with a reasonable expectation of success, to improve the user experience by giving the user full control over the audio output.
As to Claim 10, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: wherein the operations further comprise estimating the intention of the user by inputting, to a learned machine learning model, a parameter representing the motion of the head of the user, a moving speed of the user, a distance between the user and the target, and a relative angle of the user with respect to the target.
Querze teaches general concepts related to controlling a wearable audio device with a personally attributed audio engine (Querze: Abstract). Specifically, Querze teaches that sensors may detect the movement of a user's head and determine the gaze direction of the user (Querze: par. 0044). Machine learning may be used to refine the feedback logic of the audio engine (Querze: par. 0114).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to use machine learning models for the intent detection as taught and suggested by Querze. Such a person would have been motivated to do so with a reasonable expectation of success to allow for the iterative improvement of the detection as specialized to a particular user.
D.
Claim 7 is rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019, and in further view of Kim et al. (“Kim 2015”), United States Patent Application Publication 2015/0199848, published on July 16, 2015.
As to Claim 7, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: wherein the plurality of targets include a moving object, and wherein the operations further comprise estimating that, in a case in which a state in which the moving object is present in a range in which eyes of the user can see continues for a preset period, the moving object is the target visually recognized by the user.
Kim 2015 teaches general concepts related to controlling an augmented reality device using the detection of a user's gaze on a target object (Kim 2015: Abstract). Specifically, Kim 2015 teaches that a target object may be a moving object that is detected by the gaze detection system (Kim 2015: par. 0038).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to track moving targets as taught and suggested by Kim 2015. Such a person would have been motivated to do so with a reasonable expectation of success to allow for moving POI’s to be also given audio descriptions in Christensen’s system.
E.
Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Christensen, United States Patent Application Publication 2014/0025287, published on Jan. 23, 2014, in view of Ichimura et al. (“Ichimura”), United States Patent Application Publication 2019/0179147, published on June 13, 2019, in further view of the non-patent literature A. Benoit et al. (“Benoit”), “Head Nods Analysis: Interpretation of Non-Verbal Communication Gestures,” published in 2005, and in further view of Querze III et al. (“Querze”), United States Patent Application Publication 2020/0314524, published on Oct. 1, 2020.
As to Claim 9, Christensen and Ichimura teach the limitations of claim 1.
Christensen and Ichimura may not explicitly teach: the operations further comprise: acquiring intention definition data which defines a non-verbal motion corresponding to a culture to which a language to be used by the user belongs, and estimating the intention of the user based on the intention definition data and the motion of the head of the user.
Benoit teaches, in general, a real-time frequency-based method to detect head movements and to interpret head nods as non-verbal communication (Benoit: Abstract). Specifically, Benoit notes that different cultures assign different semantic meanings to head movements (Benoit: Sec. V.2, “Note that our algorithm interprets head nods in regard of a particular culture. Indeed, in some cultures (e.g. Indian), head nods should be interpreted the other way around i.e. other motion orientations and a different motion sequence.”).
Querze teaches general concepts related to controlling a wearable audio device with a personally attributed audio engine (Querze: Abstract). Specifically, Querze teaches that sensors may detect the movement of a user's head and determine the gaze direction of the user (Querze: par. 0044). Language identification may also be fed into the personal audio engine when considering how to interpret audio control commands (Querze: par. 0105).
It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the claimed invention to have modified the Christensen-Ichimura disclosures and teachings by including computer instructions to identify the user's language as taught and suggested by Querze, and then utilizing that language to infer the culture and thus the head-gesture meanings as taught and suggested by Benoit. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for real-time interpretation of head motions (Benoit: Sec. VI).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gordon et al., US Patent 11,343,613 (May 24, 2022) (describing a location-based audio information system);
Arrasvuori, US Patent Application Publication 2011/0047509 (Feb. 24, 2011) (describing POI grouping on a map with different levels of detail).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI whose telephone number is (571)270-3916. The examiner can normally be reached M-F 8-5 Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached on (571)272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES T TSAI/Primary Examiner, Art Unit 2174