DETAILED ACTION
1. Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
3. The information disclosure statements (IDS) filed 5/07/2024 and 8/16/2024 have been considered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because the claimed computer program is not a process, machine, manufacture, or composition of matter. To be considered statutory subject matter, the claim must recite that the computer program is stored on a non-transitory medium. The examiner suggests amending the claim to recite “A non-transitory computer readable storage medium storing a computer program, the computer program comprising instructions that, when executed by a computer system…” in order to overcome this rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-3, 7-10, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Salter et al. (US 2014/0375683 A1) in view of Wantland et al. (US 2014/0282220 A1).
In regard to claim 1, Salter discloses a method, comprising:
at a computer system having a display generation component, one or more input devices, and one or more cameras (Paragraphs 0017-0018 and Paragraphs 0056-0066: computing device with a display, input devices, image sensors, processor, and memory storing executable instructions):
displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras (Fig. 4 elements 102 and 402 (TIME T1), Paragraph 0001, and Paragraph 0024 lines 6-14: displaying field of view of camera (element 102) with a representation of a virtual object (element 402));
detecting movement of the computer system that adjusts the field of view of the one or more cameras (Paragraph 0001 and Paragraph 0026 lines 1-2: user has shifted their field of view toward the right);
and in response to detecting movement of the computer system that adjusts the field of view of the one or more cameras (Paragraph 0001 and Paragraph 0026 lines 1-2: user has shifted their field of view toward the right):
adjusting display of the representation of the virtual object in the first user interface region as the field of view of the one or more cameras is adjusted (Paragraph 0026 lines 10-12: shift in field of view causes object 402 to move),
and, in accordance with a determination that the movement of the computer system causes more than a threshold amount of the virtual object to move outside of a displayed portion of the field of view of the one or more cameras, generating a first alert (Paragraph 0026 lines 10-15, Paragraph 0028, Paragraph 0049, and Paragraphs 0052-0053: the shift causes object to be outside field of view. When an object is fully (e.g. threshold amount of 100%) outside the field of view, a visual and/or audio indicator (e.g. alert) is provided).
While Salter teaches displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, adjusting display of the representation of the virtual object in the first user interface region as the field of view of the one or more cameras is adjusted, and further teaches that the virtual object may be world locked (Paragraph 0045 lines 1-4), Salter fails to show wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras, and adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted, as recited in the claims. Wantland teaches augmented reality display of virtual objects similar to that of Salter. In addition, Wantland further teaches
maintaining a first spatial relationship between a representation of a virtual object and a plane detected within a physical environment and adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected (Paragraph 0016: In some embodiments, a virtual object in an augmented reality image may be presented in a world-locked view relative to a reference object in the physical environment. The term "world-locked" as used herein signifies that the virtual object is displayed as positionally fixed relative to objects in the real-world. This may allow a user to move within the physical environment to view the depicted virtual object from different perspectives, as if the user were walking around a real object… Any suitable reference object or objects may be utilized. Examples include, but are not limited to, geometric planes detected in the physical scene via the image data …).
It would have been obvious to one of ordinary skill in the art, having the teachings of Salter and Wantland before the effective filing date of the claimed invention, to modify the displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of one or more cameras, and the adjusting display of the representation of the virtual object in the first user interface region as the field of view of the one or more cameras is adjusted, taught by Salter, to include the maintaining a first spatial relationship between a representation of a virtual object and a plane detected within a physical environment and adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected, as taught by Wantland, in order to obtain wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras and adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted. It would have been advantageous to utilize such a combination because it would allow a user to move within the physical environment to view the depicted virtual object from different perspectives, as if the user were walking around a real object, as suggested by Wantland (Paragraph 0016 lines 6-9).
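For illustration only, the out-of-view determination mapped above (Salter, Paragraphs 0026 and 0052-0053) can be sketched as computing the fraction of a world-locked virtual object's screen-space bounding box that falls outside the displayed field of view and generating an alert past a threshold. This sketch is not drawn from the cited references; all names and the 50% threshold value are hypothetical.

```python
# Hypothetical sketch: fraction of a virtual object's projected bounding box
# that remains inside the displayed field of view, and a threshold alert.
# Boxes are (left, top, right, bottom) in screen coordinates.

def visible_fraction(obj_box, view_box):
    """Fraction of obj_box's area that overlaps view_box."""
    l1, t1, r1, b1 = obj_box
    l2, t2, r2, b2 = view_box
    overlap_w = max(0.0, min(r1, r2) - max(l1, l2))
    overlap_h = max(0.0, min(b1, b2) - max(t1, t2))
    area = (r1 - l1) * (b1 - t1)
    return (overlap_w * overlap_h) / area if area > 0 else 0.0

OUT_OF_VIEW_THRESHOLD = 0.5  # hypothetical: more than 50% outside triggers the alert

def should_alert(obj_box, view_box, threshold=OUT_OF_VIEW_THRESHOLD):
    # "More than a threshold amount outside" == visible fraction below 1 - threshold.
    return (1.0 - visible_fraction(obj_box, view_box)) > threshold
```

Under Salter's mapping, the threshold amount is 100% (the object is fully outside the field of view); the claim language, however, reads on any threshold value.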
In regard to claim 2, Salter discloses wherein the computer system includes one or more audio output generators, and generating the first alert includes generating, via the one or more audio output generators, a first audio alert (Paragraphs 0052-0053: audio indications emitting sounds from speakers).
In regard to claim 3, Salter discloses including, after the movement of the computer system causes more than the threshold amount of the virtual object to move outside of the displayed portion of the field of view of the one or more cameras, generating audio associated with the virtual object (Paragraphs 0052-0053: the indication of the object out of the user’s view comprises audio indications).
In regard to claim 7, Salter discloses wherein outputting the first alert includes generating an audio output that indicates an operation that is performed with respect to the virtual object and a resulting state of the virtual object after the performance of the operation (Paragraphs 0052-0053: the audio output indicates the state of the object (e.g. distance from the object to the user) after the object is moved out of view (e.g. operation performed with respect to the object)).
In regard to claim 8, Salter discloses wherein the resulting state of the virtual object after performance of the operation is described in the audio output in the first alert in relation to a reference frame corresponding to the physical environment captured in the field of view of the one or more cameras (Paragraphs 0052-0053: the sounds are varied based on a property, such as the distance from the object to the user or relative location).
In regard to claim 9, the combination of Salter and Wantland further discloses detecting additional movement of the computer system that further adjusts the field of view of the one or more cameras after generation of the first alert; and in response to detecting the additional movement of the computer system that further adjusts the field of view of the one or more cameras: adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is further adjusted, and, in accordance with a determination that the additional movement of the computer system causes more than a second threshold amount of the virtual object to move into a displayed portion of the field of view of the one or more cameras, generating a second alert (The rejection of claim 1 and all cited portions and explanations are incorporated herein. Further, Salter teaches providing a second alert when an object comes into the field of view, see Paragraph 0026 lines 3-10. 
Accordingly, as the combination teaches “wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within a physical environment that is captured in the field of view of the one or more cameras and adjusting display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras as the field of view of the one or more cameras is adjusted” (see rejection of claim 1) and Salter further teaches generating a second alert when a virtual object moves into the field of view, the combination reasonably suggests that when a user further adjusts the field of view in a manner that causes the virtual object to move into the field of view, a second alert is provided).
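For illustration only, the first alert (object leaves the field of view) and second alert (object re-enters the field of view) suggested by the combination can be sketched as a small per-object state tracker. This sketch is not drawn from the cited references; the class and threshold names are hypothetical.

```python
# Hypothetical sketch: track whether a virtual object is out of view and emit
# a first alert on exit and a second alert on re-entry, given the fraction of
# the object currently visible each frame.

class ObjectVisibilityTracker:
    def __init__(self, out_threshold=0.5, in_threshold=0.5):
        self.out_threshold = out_threshold  # fraction outside that triggers the first alert
        self.in_threshold = in_threshold    # fraction inside that triggers the second alert
        self.out_of_view = False

    def update(self, fraction_visible):
        """Return 'first_alert', 'second_alert', or None for this frame."""
        if not self.out_of_view and (1.0 - fraction_visible) > self.out_threshold:
            self.out_of_view = True
            return "first_alert"
        if self.out_of_view and fraction_visible > self.in_threshold:
            self.out_of_view = False
            return "second_alert"
        return None
```

Tracking the out-of-view state prevents the same alert from repeating on every frame while the object remains outside the field of view.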
In regard to claim 10, Salter discloses wherein the computer system includes one or more audio output generators, and generating the second alert includes generating, via the one or more audio output generators, a third audio alert (Paragraphs 0052-0053: the visual indicators (e.g. second alert) can comprise an additional audio indication).
In regard to claim 19, system claim 19 corresponds generally to method claim 1 and recites similar features in system form, and is therefore rejected under the same rationale.
In regard to claim 20, claim 20 corresponds generally to method claim 1 and recites similar features, and is therefore rejected under the same rationale.
Allowable Subject Matter
6. Claims 4-6 and 11-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
In regard to claim 4, the prior art of record, alone or in combination, fails to disclose the recited “wherein outputting the first alert includes generating an audio output that indicates an amount of the virtual object that remains visible on the displayed portion of the field of view of the one or more cameras” in combination with the other elements recited.
In regard to claim 5, the prior art of record, alone or in combination, fails to disclose the recited “wherein outputting the first alert includes generating an audio output that indicates an amount of the displayed portion of the field of view that is occluded by the virtual object” in combination with the other elements recited.
In regard to claim 6, the prior art of record, alone or in combination, fails to disclose the recited “in response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, generating a second audio alert” in combination with the other elements recited.
In regard to claims 11-13, the prior art of record, alone or in combination, fails to disclose the recited “in response to detecting the request to switch to another object manipulation type applicable to the virtual object, generating an audio output that names a second object manipulation type among a plurality of object manipulation types applicable to the virtual object, wherein the second object manipulation type is distinct from the first object manipulation type” in combination with the other elements recited.
In regard to claim 14, the prior art of record, alone or in combination, fails to disclose the recited “in response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, generating an audio output naming a second operation among the plurality of operations applicable to the virtual object, wherein the second operation is distinct from the first operation” in combination with the other elements recited.
In regard to claims 15-17, the prior art of record, alone or in combination, fails to disclose the recited “generating a fourth audio alert indicating that the virtual object is placed in the first user interface region in relation to the physical environment captured in the field of view of the one or more cameras” in combination with the other elements recited.
In regard to claim 18, the prior art of record, alone or in combination, fails to disclose the recited “while displaying the first user interface region without displaying the first control in the first user interface region, detecting a touch input at a respective location on the touch-sensitive surface that corresponds to the first location in the first user interface region; and in response to detecting the touch input, generating a fifth audio alert including an audio output that specifies an operation corresponding to the first control” in combination with the other elements recited.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Andrew et al. (US 10094681 B2), see at least Fig. 4.
Rantala et al. (US 2018/0007262 A1), see at least the abstract.
Naples et al. (US 2017/0309079 A1), see at least the abstract, Fig. 13, and Paragraph 0106.
Mabbutt et al. (US 9041741 B2), see at least the abstract.
Jjwhite (Making Virtual Reality Systems Accessible to Users with Disabilities, https://www.w3.org/WAI/APA/task-forces/research-questions/wiki/index.php?title=Accessibility_of_Virtual_Reality&oldid=248, 10/20/2017).
Things Entertainment (Accessibility Innovation Art Share Out: Augmented Reality Accessibility, https://www.thingsentertainment.net/accessible_augmented_reality_techniques.html, 11/20/2017).
Coughlan et al. (AR4VI: AR as an Accessibility Tool for People with Visual Impairments, IEEE Int Symp Mixed Augment Real, 10/2017).
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S ULRICH whose telephone number is (571)270-1397. The examiner can normally be reached M-F 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached at (571)272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
9. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Nicholas Ulrich/Primary Examiner, Art Unit 2179