DETAILED ACTION
This is responsive to the amendment filed 03 December 2025.
Claims 1-7 remain pending and are considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-7 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claims 1-6 are objected to because of the following informalities: in lines 3-4 of claim 1, it is believed the limitation “a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field position information indicating a position of the user” requires a comma between “field” and “position,” i.e., it should read “a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field, position information indicating a position of the user.” Claim 6 suffers from a similar deficiency. The dependent claims are objected to for depending upon an objected-to claim without providing a remedy.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 4 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. The limitations of claim 4 merely repeat the limitations of parent claim 1 (lines 15-18). Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (US 2020/0257484) in view of Rosenberg (US 2006/0256133).
Claims 1 and 4:
Qian discloses an information providing system that provides information using output from an output device worn on a head of a user (Abstract, see also [0014]) comprising:
a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field, position information indicating a position of the user (“determining, using a processor, a user's geographic position in an area”, [0002], see also “Other conventional location identification methods may also be utilized such as geolocation, radiolocation, and other conventional types of position tracking methods utilized by various other positioning systems”, [0031]), and sight direction information indicating a sight direction that is a direction that a face of the user faces (“identifying, using at least one sensor associated with the information handling device, a user's line of sight”, [0002]);
a storage unit that stores, in advance, object position information indicating respective positions of a plurality of objects that may be viewed by the user (“accessing, from an accessible storage location, map data associated with the area”, [0002], see also “the map data may be three-dimensional (3D) map data that may comprise accurate location information for all objects encompassed in the area”, [0032]), and explanation information for explaining each of the plurality of objects (“accessing, from an accessible storage location, map data associated with the area”, [0002], see also “determine an object's identity by identifying the object associated with the user's line of sight and thereafter obtaining the name for that object from the map data. In another embodiment, the user's device may capture an image of the object that may subsequently be provided into an image-based search engine that may be able to determine the object's identity. Responsive to determining the object's identity, an embodiment may also be able to access additional information about the object by referring to a data store accessible to the device (e.g., stored locally, available on a website online, etc.)”, [0035]);
an estimation unit that estimates the object being viewed by the user based on the position information and the sight direction information of the user, and the object position information (“accessing, from an accessible storage location, map data associated with the area; identifying, using at least one sensor associated with the information handling device, a user's line of sight; determining, based on the user's geographic position and the map data, an object associated with the user's line of sight; determining, using a processor, an identity of the object”, [0002]); and
an information output unit that outputs the explanation information of the estimated object from the voice output device worn on the head of the user (“displaying the extended-reality content for the identified object in a field of view of the information handling device”, [0002], see also “a head-mounted display (HMD) may be worn by a user that can display information about proximate objects in mixed or augmented reality”, [0014]).
Qian does not explicitly disclose outputting the explanation information of the estimated object using voice and wherein, when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user, the information output unit stops outputting the voice for the explanation information of the object that the user has lastly viewed.
In an analogous system similarly using an estimation unit to estimate an object being viewed by a user and outputting explanation information of the estimated object, Rosenberg discloses outputting the explanation information of the estimated object using voice (“Upon determining that the user's gaze falls within the defined spatial area, the body of the video stream advertisement is made to play by software routines. Software controlled play of a video segment may be performed using standard video display methods known to the art. For example the video segment may be stored as a standard digital file, such as an MPEG file, which is read from memory, decoded, and displayed upon a particular screen area of a target display screen at a prescribed rate. In general audio content is also accessed from memory and displayed through speakers, headphones, or other audio display hardware at a prescribed rate”, [0010]) and wherein, when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user, the information output unit stops outputting the voice for the explanation information of the object that the user has lastly viewed (“If it is determined that the user's gaze has left the defined spatial area for more than some threshold amount of time, the playing of the video stream advertisement is halted”, [0010]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of outputting Qian’s explanation information as voice via one or more speakers as disclosed by Rosenberg in order to provide vocal information to supplement displayed information. It would have further been obvious to stop outputting the voice for the explanation information of the object that the user has lastly viewed when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user in order to stop outputting only when it is firmly determined that the user is no longer interested (see Rosenberg, “a time threshold such that the video stream is not paused unless it is determined by the hardware and software of the present invention that the user has looked away from the defined spatial area for more than that threshold amount of time”, [0010]).
Claim 3:
Qian in view of Rosenberg discloses the information providing system according to claim 1, wherein the storage unit stores setting information of a visual field in which a range that can be seen by eye of the user is set in advance, and the estimation unit estimates the object being viewed by the user within the range of the visual field set in advance (Qian, [0032]).
Claim 6:
Qian in view of Rosenberg discloses a method of providing information (Qian, Abstract) using a voice by a computer carried by a user, the method comprising the steps performed by the system of claim 1 as shown above.
Claim 7:
Qian in view of Rosenberg discloses a program executed by a computer (Qian, [0003]) carried by a user, the program causing the computer to implement the steps performed by the system of claim 1 as shown above.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (US 2020/0257484) in view of Rosenberg (US 2006/0256133) and Noda et al. (US 2015/0104049).
Claim 2:
Qian in view of Rosenberg discloses the information providing system according to claim 1 but does not explicitly disclose that the storage unit stores, in advance, information indicating a virtual position of a sound source associated with each of the objects, and that the information output unit outputs a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user.
In an analogous system similarly outputting explanation information of an estimated object, Noda discloses a storage unit storing, in advance, information indicating a virtual position of a sound source associated with each of the objects (“The sound generator table 251 stores object IDs 251A, object positions 251B … An object position 251B represents the current position of the object, e.g., the position coordinates (Xm1,Ym1) in a coordinate system originating from a predetermined position”, [0040]), and an information output unit outputting a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user (“When the watched objet is determined as a target, the audio controller 23 reads, for example, "OPERATE REMOTE CONTROLLER" as a post-determination sound generator file 251G. The audio controller 23 then virtually localizes the sound generator in the position of the target and generates acoustic signals for outputting "OPERATE REMOTE CONTROLLER" from the position of the target”, [0040], see also “The target is an object for which stereophonic acoustic guidance audio, i.e., audio AR, is requested to be implemented from among objects to be watched”, [0052] and [0006]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of Qian’s storage unit storing, in advance, information indicating a virtual position of a sound source associated with each of the objects, and outputting a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user in order to “allow[] the user to know where a target is arranged and how to use the target” (see Noda, [0044]).
Allowable Subject Matter
Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, individually or in combination, does not disclose mobile terminals carried by a plurality of users; and at least one server, wherein each of the mobile terminals includes the position direction acquisition unit, the storage unit, the estimation unit, and the information output unit, the estimation unit of each of the mobile terminals supplies, to the server, visibility data including information for identifying the object estimated to be viewed by the user of each of the mobile terminals, information indicating a date and a time of the estimation, and information for identifying the user of each of the mobile terminals, and the server includes: a data accumulation unit that accumulates the visibility data supplied from the estimation unit of each of the mobile terminals, and a statistical data generation unit that generates, per object, statistical data indicating a distribution of attributes of the users from attribute data indicating the attributes of the users, and the visibility data, and outputs the statistical data as claimed in combination with the other limitations.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL G NEWAY/Primary Examiner, Art Unit 2657