Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is responsive to the Amendment filed 9/4/2025.
2. Claims 21-22, 26-32 and 35-45 are pending in this application. Claims 21, 37 and 40 are independent claims. In the instant Amendment, claims 1, 21, 30-32, 35-37 and 40 were amended, claims 23-25 and 33-34 were canceled and claims 41-45 were added. This action is made Final.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claim(s) 21-22, 26-32 and 35-45 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang et al. ("Hwang," US 2018/0217975) in view of Morris et al. ("Morris," US 2018/0017144).
Regarding claim 21, Hwang discloses a method comprising:
at a device including an input device, an image sensor, a display, one or more processors, and non-transitory memory (see paragraph [0021]; e.g., "image capture device, such as a head up display or glasses that contain a camera"; par. 63, 69: image capture device 710, e.g., HUD or wearable device such as Google Glass; fig. 7; par. 93, 95-96; fig. 10: processors, memory);
obtaining, using the image sensor, one or more images of a physical environment (see paragraph [0063] or [0069]; fig. 8);
obtaining one or more semantic labels associated with the physical environment based on the one or more images of the physical environment (see paragraph [0064]; identifying features such as objects, people; or, par. 70: identify objects such as animals, buildings, people, trees, etc);
receiving, via the input device, text (see paragraph [0067] or [0073]);
determining a first set of one or more text suggestions based on the text and one or more semantic labels (see paragraph [0067] or [0073]); and
displaying, on the display, the first set of one or more text suggestions (see paragraph [0067]; e.g., as indicated, the word suggestion can be touched, i.e., it is displayed; par. 73: the suggestion provision is in line with figs. 5-6 as indicated in par. 73, and, as can be seen in corresponding par. 53, the suggestions are displayed on the GUI).
Hwang does not expressly disclose determining a second set of one or more text suggestions based on the text independent of the one or more semantic labels; and displaying the set of one or more semantic labels.
However, Morris discloses determining a second set of one or more text suggestions based on the text independent of the one or more semantic labels; and displaying the set of one or more semantic labels (see fig 4, 406(1) and 406(N); also see paragraphs [0090]-[0092]; e.g., “FIG. 4 also includes an example of suggestions ranked by relatedness to surroundings of a user from most related to the surroundings (i.e., suggestion 406(1) when an environment identifier and/or a salient object label includes “grocery store”) to not related to the surroundings (i.e., suggestion 406(N)).”).
It would have been obvious to an artisan before the effective filing date of the present invention to include Morris’ teachings in Hwang’s user interface in an effort to provide a more user-friendly interface that simplifies user inputs.
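For illustration only (no such code appears in Hwang or Morris), the claimed method of claim 21 as mapped above can be sketched in a few lines: semantic labels obtained from images of the physical environment drive a first, environment-aware suggestion set, while a second set is determined from the typed text independently of those labels. The vocabulary, label associations, and ranking below are all hypothetical placeholders.

```python
# Illustrative sketch of the method of claim 21 as mapped above.
# All vocabulary, labels, and associations are hypothetical stand-ins
# for image-based object detection and a trained suggestion model.

VOCABULARY = ["grocery", "groceries", "ground", "group", "grow"]

# Hypothetical label-to-word associations (stands in for a trained model).
LABEL_ASSOCIATIONS = {
    "grocery store": {"grocery", "groceries"},
    "park": {"grow", "ground"},
}

def semantic_labels(detected_objects):
    """Map objects detected in images of the environment to semantic labels."""
    return [obj for obj in detected_objects if obj in LABEL_ASSOCIATIONS]

def suggest(text, labels):
    """Return (label-based suggestions, label-independent suggestions)."""
    prefix_matches = [w for w in VOCABULARY if w.startswith(text)]
    related = set().union(*(LABEL_ASSOCIATIONS[l] for l in labels)) if labels else set()
    # First set: suggestions determined based on the text AND the semantic labels.
    first_set = [w for w in prefix_matches if w in related]
    # Second set: suggestions determined from the text independent of the labels.
    second_set = [w for w in prefix_matches if w not in related]
    return first_set, second_set

labels = semantic_labels(["grocery store", "car"])   # e.g., objects seen by the image sensor
first, second = suggest("gro", labels)
# first  -> ["grocery", "groceries"]  (environment-aware)
# second -> ["ground", "group", "grow"]  (environment-independent)
```

Both sets would then be displayed, with the environment-aware set optionally distinguished by position or visual indicator, consistent with the Morris mapping for claims 41-45.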
Regarding claim 22, Hwang discloses wherein obtaining the one or more semantic labels associated with the physical environment includes detecting an object in the one or more images of the physical environment and obtaining one or more semantic labels associated with the object (see paragraphs [0071] and [0073]; e.g., identifying objects).
Regarding claim 26, Hwang discloses wherein the device further comprises an additional sensor, further comprising obtaining, from the additional sensor, environmental information of the physical environment, wherein obtaining the one or more semantic labels is based on the environmental information (see paragraphs [0063], [0066]-[0067] and [0069]; fig. 8).
Regarding claim 27, Hwang discloses wherein the additional sensor includes a microphone and the environmental information includes audio of the physical environment (see paragraphs [0037], [0066] and [0067]).
Regarding claim 28, Hwang discloses wherein obtaining the one or more semantic labels associated with the physical environment includes detecting an event in the audio of the physical environment and obtaining one or more semantic labels associated with the event (see paragraphs [0037], [0066] and [0067]).
Regarding claim 29, Hwang discloses wherein the additional sensor includes at least one of a thermometer, barometer, hygrometer, light sensor, or physical locator (see paragraph [0037]).
Regarding claim 30, Hwang discloses further comprising determining a respective one or more weights for the one or more semantic labels associated with the physical environment, wherein determining the first set of one or more text suggestions is further based on the respective one or more weights (see paragraphs [0040]-[0042] and [0053]; e.g., determine highest ranking candidate word).
Regarding claim 31, Hwang discloses wherein the text is entered into a text field, further comprising receiving, via the input device, selection of a particular text suggestion of the first set of one or more text suggestions and entering the particular text suggestion in the text field (see paragraph [0073]; e.g., select suggested word).
Regarding claim 32, Morris discloses receiving, via the input device, selection of a particular text suggestion of the first set of one or more text suggestions and performing a search with the particular text suggestion as a search query (see paragraph [0043]; e.g., input search in search query).
Regarding claim 35, Morris discloses wherein the first set of one or more text suggestions are displayed proximate to the text (see fig 4, 406).
Regarding claim 36, Morris discloses wherein the first set of one or more text suggestions are displayed proximate to a keyboard used to receive the text (see fig 4, 406).
Claim 37 is similar in scope to claim 21 and is therefore rejected under similar rationale.
Regarding claim 38, Hwang discloses wherein the one or more processors are to obtain the one or more semantic labels associated with the physical environment by detecting an object in the one or more images of the physical environment and obtaining one or more semantic labels associated with the object (see paragraphs [0071] and [0073]; e.g., identifying objects).
Regarding claim 39, Hwang discloses further comprising an additional sensor, wherein the one or more processors are further to obtain, from the additional sensor, environmental information of the physical environment and to obtain the one or more semantic labels based on the environmental information (see paragraphs [0063], [0066]-[0067] and [0069]; fig. 8).
Claim 40 is similar in scope to claim 21 and is therefore rejected under similar rationale.
Regarding claim 41, Morris discloses wherein the first set of one or more text suggestions is displayed differently than the second set of one or more text suggestions (see paragraph [0097]; e.g., visual indicator such as color, size, etc.).
Regarding claim 42, Morris discloses wherein text of the first set of one or more text suggestions is displayed differently than the text of the second set of one or more text suggestions (see paragraph [0097]; e.g., visual indicator such as color, size, etc.).
Regarding claim 43, Morris discloses wherein text of the first set of one or more text suggestions is a different color than the text of the second set of one or more text suggestions (see paragraph [0097]; e.g., visual indicator such as color, size, etc.).
Regarding claim 44, Morris discloses wherein the first set of one or more text suggestions is displayed on a first portion of the display separate from a second portion of the display in which the second set of one or more text suggestions is displayed (see fig 4).
Regarding claim 45, Morris discloses wherein the first portion of the display is next to the second portion of the display (see fig 4).
Response to Arguments
5. Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ouyang et al. (US 2017/0308522).
7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY whose telephone number is (571)272-6480. The examiner can normally be reached M-F 9:00a - 5:30p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RASHAWN N TILLERY/Primary Examiner, Art Unit 2174