Prosecution Insights
Last updated: April 19, 2026
Application No. 18/291,782

INFORMATION PROVIDING SYSTEM, METHOD, AND PROGRAM

Final Rejection — §103, §112
Filed
Jan 24, 2024
Examiner
NEWAY, SAMUEL G
Art Unit
2657
Tech Center
2600 — Communications
Assignee
Connectome Design Inc.
OA Round
2 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 75%, above average (517 granted / 686 resolved; +13.4% vs TC avg)
Interview Lift: +7.6% for resolved cases with interview (a moderate lift of roughly +8%)
Typical Timeline: 3y 0m avg prosecution (29 currently pending)
Career History: 715 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 34.5% (-5.5% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 686 resolved cases.
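The per-statute deltas above are internally consistent: subtracting each statute's "vs TC avg" delta from its rate recovers the same Tech Center baseline. A quick check using only the values from the table (the 40.0% baseline is derived here, not stated in the source):

```python
# Each statute's rate minus its "vs TC avg" delta should recover the
# same Tech Center baseline estimate (values from the table above).
stats = {
    "§101": (16.6, -23.4),
    "§103": (34.5, -5.5),
    "§102": (17.1, -22.9),
    "§112": (20.1, -19.9),
}

baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same ~40.0% TC average estimate
```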

Office Action

§103 §112
DETAILED ACTION

This is responsive to the amendment filed 03 December 2025. Claims 1-7 remain pending and are considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claims 1-7 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 1-6 are objected to because of the following informalities: in lines 3-4 of claim 1, it is believed the limitation “a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field position information indicating a position of the user” requires a comma between “field” and “position”, i.e. it should read ‘a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field, position information indicating a position of the user’. Claim 6 suffers from a similar deficiency. The dependent claims are objected to for depending upon an objected-to claim without providing a remedy.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 4 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. The limitations of claim 4 merely repeat the limitations of parent claim 1 (lines 15-18). Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (US 2020/0257484) in view of Rosenberg (US 2006/0256133).
Claims 1 and 4: Qian discloses an information providing system that provides information using output from an output device worn on a head of a user (Abstract, see also [0014]) comprising: a position direction acquisition unit that acquires, using at least one of radio waves and geomagnetic field, position information indicating a position of the user (“determining, using a processor, a user's geographic position in an area”, [0002], see also “Other conventional location identification methods may also be utilized such as geolocation, radiolocation, and other conventional types of position tracking methods utilized by various other positioning systems”, [0031]), and sight direction information indicating a sight direction that is a direction that a face of the user faces (“identifying, using at least one sensor associated with the information handling device, a user's line of sight”, [0002]); a storage unit that stores, in advance, object position information indicating respective positions of a plurality of objects that may be viewed by the user (“accessing, from an accessible storage location, map data associated with the area”, [0002], see also “the map data may be three-dimensional (3D) map data that may comprise accurate location information for all objects encompassed in the area”, [0032]), and explanation information for explaining each of the plurality of objects (“accessing, from an accessible storage location, map data associated with the area”, [0002], see also “determine an object's identity by identifying the object associated with the user's line of sight and thereafter obtaining the name for that object from the map data. In another embodiment, the user's device may capture an image of the object that may subsequently be provided into an image-based search engine that may be able to determine the object's identity. Responsive to determining the object's identity, an embodiment may also be able to access additional information about the object by referring to a data store accessible to the device (e.g., stored locally, available on a website online, etc.)”, [0035]); an estimation unit that estimates the object being viewed by the user based on the position information and the sight direction information of the user, and the object position information (“accessing, from an accessible storage location, map data associated with the area; identifying, using at least one sensor associated with the information handling device, a user's line of sight; determining, based on the user's geographic position and the map data, an object associated with the user's line of sight; determining, using a processor, an identity of the object”, [0002]); and an information output unit that outputs the explanation information of the estimated object from the voice output device worn on the head of the user (“displaying the extended-reality content for the identified object in a field of view of the information handling device”, [0002], see also “a head-mounted display (HMD) may be worn by a user that can display information about proximate objects in mixed or augmented reality”, [0014]).

Qian does not explicitly disclose outputting the explanation information of the estimated object using voice and wherein, when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user, the information output unit stops outputting the voice for the explanation information of the object that the user has lastly viewed.

In an analogous system similarly using an estimation unit to estimate an object being viewed by a user and outputting explanation information of the estimated object, Rosenberg discloses outputting the explanation information of the estimated object using voice (“Upon determining that the user's gaze falls within the defined spatial area, the body of the video stream advertisement is made to play by software routines. Software controlled play of a video segment may be performed using standard video display methods known to the art. For example the video segment may be stored as a standard digital file, such as an MPEG file, which is read from memory, decoded, and displayed upon a particular screen area of a target display screen at a prescribed rate. In general audio content is also accessed from memory and displayed through speakers, headphones, or other audio display hardware at a prescribed rate”, [0010]) and wherein, when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user, the information output unit stops outputting the voice for the explanation information of the object that the user has lastly viewed (“If it is determined that the user's gaze has left the defined spatial area for more than some threshold amount of time, the playing of the video stream advertisement is halted”, [0010]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of outputting Qian’s explanation information as voice via one or more speakers as disclosed by Rosenberg in order to provide vocal information to supplement displayed information. It would have further been obvious to stop outputting the voice for the explanation information of the object that the user has lastly viewed when the estimation unit detects that a predetermined time has passed after the user stopped looking at the object having been previously viewed by the user in order to stop outputting only when it is firmly determined that the user is not interested anymore (see Rosenberg, “a time threshold such that the video stream is not paused unless it is determined by the hardware and software of the present invention that the user has looked away from the defined spatial area for more than that threshold amount of time”, [0010]).

Claim 3: Qian in view of Rosenberg discloses the information providing system according to claim 1, wherein the storage unit stores setting information of a visual field in which a range that can be seen by eye of the user is set in advance, and the estimation unit estimates the object being viewed by the user within the range of the visual field set in advance (Qian, [0032]).

Claim 6: Qian in view of Rosenberg discloses a method of providing information (Qian, Abstract) using a voice by a computer carried by a user, the method comprising the steps performed by the system of claim 1 as shown above.

Claim 7: Qian in view of Rosenberg discloses a program executed by a computer (Qian, [0003]) carried by a user, the program causing the computer to implement the steps performed by the system of claim 1 as shown above.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (US 2020/0257484) in view of Rosenberg (US 2006/0256133) and Noda et al. (US 2015/0104049).
Claim 2: Qian in view of Rosenberg discloses the information providing system according to claim 1 but does not explicitly disclose that the storage unit stores, in advance, information indicating a virtual position of a sound source associated with each of the objects, and the information output unit outputs a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user.

In an analogous system similarly outputting explanation information of an estimated object, Noda discloses a storage unit storing, in advance, information indicating a virtual position of a sound source associated with each of the objects (“The sound generator table 251 stores object IDs 251A, object positions 251B … An object position 251B represents the current position of the object, e.g., the position coordinates (Xm1,Ym1) in a coordinate system originating from a predetermined position”, [0040]), and an information output unit outputting a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user (“When the watched objet is determined as a target, the audio controller 23 reads, for example, "OPERATE REMOTE CONTROLLER" as a post-determination sound generator file 251G. The audio controller 23 then virtually localizes the sound generator in the position of the target and generates acoustic signals for outputting "OPERATE REMOTE CONTROLLER" from the position of the target”, [0040], see also “The target is an object for which stereophonic acoustic guidance audio, i.e., audio AR, is requested to be implemented from among objects to be watched”, [0052] and [0006]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references to yield the predictable result of Qian’s storage unit storing, in advance, information indicating a virtual position of a sound source associated with each of the objects, and outputting a voice obtained by performing stereophonic processing on the voice indicating the explanation information according to the virtual position of the sound source seen from a current position of the user in order to “allow[[s]] the user to know where a target is arranged and how to use the target” (see Noda, [0044]).

Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, individually or in combination, does not disclose mobile terminals carried by a plurality of users; and at least one server, wherein each of the mobile terminals includes the position direction acquisition unit, the storage unit, the estimation unit, and the information output unit, the estimation unit of each of the mobile terminals supplies, to the server, visibility data including information for identifying the object estimated to be viewed by the user of each of the mobile terminals, information indicating a date and a time of the estimation, and information for identifying the user of each of the mobile terminals, and the server includes: a data accumulation unit that accumulates the visibility data supplied from the estimation unit of each of the mobile terminals, and a statistical data generation unit that generates, per object, statistical data indicating a distribution of attributes of the users from attribute data indicating the attributes of the users, and the visibility data, and outputs the statistical data as claimed in combination with the other limitations.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL G NEWAY/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Jan 24, 2024
Application Filed
Sep 01, 2025
Non-Final Rejection — §103, §112
Dec 03, 2025
Response Filed
Feb 16, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602538
METHOD AND SYSTEM FOR EXEMPLAR LEARNING FOR TEMPLATIZING DOCUMENTS ACROSS DATA SOURCES
2y 5m to grant Granted Apr 14, 2026
Patent 12603177
INTERACTIVE CONVERSATIONAL SYMPTOM CHECKER
2y 5m to grant Granted Apr 14, 2026
Patent 12603092
AUTOMATED ASSISTANT CONTROL OF NON-ASSISTANT APPLICATIONS VIA IDENTIFICATION OF SYNONYMOUS TERM AND/OR SPEECH PROCESSING BIASING
2y 5m to grant Granted Apr 14, 2026
Patent 12596734
PARSE ARBITRATOR FOR ARBITRATING BETWEEN CANDIDATE DESCRIPTIVE PARSES GENERATED FROM DESCRIPTIVE QUERIES
2y 5m to grant Granted Apr 07, 2026
Patent 12596892
MACHINE TRANSLATION SYSTEM FOR ENTERTAINMENT AND MEDIA
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
83%
With Interview (+7.6%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
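The headline projections above follow directly from the career counts reported earlier. A minimal sketch of the arithmetic, assuming the grant probability is simply the career allow rate and the "with interview" figure adds the stated +7.6% lift on top:

```python
# Reproduce the dashboard's headline projections from the examiner's
# career counts. Assumption: grant probability == career allow rate.
granted = 517
resolved = 686
interview_lift = 0.076  # the stated +7.6% interview lift

allow_rate = granted / resolved
grant_probability = round(allow_rate * 100)                  # 75 (%)
with_interview = round((allow_rate + interview_lift) * 100)  # 83 (%)

print(f"Grant probability: {grant_probability}%")  # matches the 75% shown
print(f"With interview:   {with_interview}%")      # matches the 83% shown
```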
