DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/04/2024 and 12/24/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters "220" and "320" have both been used to designate Memory. Reference characters "230" and "330" have both been used to designate Operating System. Reference characters "240" and "340" have both been used to designate XR Experience Module. Reference characters "242" and "342" have both been used to designate Data Obtaining Unit. Reference characters "202" and "302" have both been used to designate Processing Unit(s). Reference characters "208" and "308" have both been used to designate Comm. Interface(s). Reference characters "210" and "310" have both been used to designate Programming Interface(s). Reference characters "248" and "348" have both been used to designate Data Transmission Unit. Reference characters "3010" and "3030" have both been used to designate Obtain information. The figures contain inconsistent reference numbering; the above is a non-exhaustive list. Further revision is required, and additional typographical errors should be corrected accordingly.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, requires the specification to be written in "full, clear, concise, and exact terms." The specification is replete with terms which are not clear, concise, and exact, and should be revised carefully in order to comply with 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112. Examples of unclear, inexact, or verbose usage in the specification include the following. Many important terms that are cited in the drawings and mentioned a few times in the Detailed Description are not labeled with their reference numbers throughout the remaining paragraphs of the Detailed Description. For example, paragraph 0163 describes Fig. 1A and labels "scene 105" with reference number 105 once; paragraph 0164 then mentions "scene" four times but properly labels it only twice, and many other paragraphs recite "scene" without a proper label. Similarly, paragraph 0029 labels "display generation component 120" with reference number 120 twice, but the term is then mentioned multiple times throughout the detailed disclosure without a proper label, as in paragraph 0039 and many other paragraphs. This applies to many other terms throughout the detailed disclosure that lack a proper reference label. The disclosure also contains inconsistent reference numbering for terms that were previously labeled; for example, "scene cameras" is referenced by different numbers (6-106, 6-102, 6-306), and many other terms throughout the detailed disclosure have similarly inconsistent reference numbering. Further revision is required, and additional typographical errors should be corrected accordingly.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 7 recites the phrase "visual prominence relative to visual prominence," which renders the scope of the claim indefinite. It is not clear what "visual prominence relative to visual prominence" means, and the claim should further specify this relationship. Further revision is required to provide a clear explanation of the claimed subject matter.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 9, 17, 18, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Malia (CN-114402275-A, hereinafter "Malia").
Regarding claim 1, Malia teaches “(Original) A method comprising: at a computer system in communication with a display generation component and one or more input devices:” (According to some embodiments, the method is performed on a computer system having a display generation component and one or more input devices; Page 2, Para 4);
“displaying, via the display generation component, a user interface” (displaying a user interface; Page 2, Para 3); “that includes a first representation of a first location experience” (a representation of the physical environment, wherein the representation of the physical environment includes a representation of a first physical object occupying a first physical space in the physical environment and having a first corresponding object attribute; Page 2, Para 4); “and a first portion of a navigation user interface element with a first orientation” (12E-12F illustrate the process by which a user performs a zoom-in operation on a still view representation of the physical environment 1200; Page 55, Para 9); “that includes a first representation of a first point of interest experience associated with the first location experience” (In response, FIG. 5I shows displaying a real-time representation 518 of the content displayed in the room 502 based on the location of the user 501; Page 28, Para 1);
“while displaying, via the display generation component, the user interface:
detecting, via the one or more input devices, a first input directed to the first representation of the first location experience; and” (The method includes detecting a first input corresponding to a virtual object, wherein movement of the first input corresponds to a request to move the virtual object in the representation of the physical environment relative to the representation of the first physical object; Page 2, Para 4);
“in response to detecting the first input:
changing the display of the navigation user interface element from the first orientation to a second orientation corresponding to the first location experience; and” (a virtual object at a location in the representation of the physical environment that corresponds to a second physical space in the physical environment that is different from the first physical space; Page 2, Para 4);
“displaying the navigation user interface element to include a second portion, different from the first portion and associated with the first location experience.” (The method includes, after receiving the input, displaying an annotation on a portion of the displayed second representation of the second previously captured media, wherein the second previously captured media is different from the first previously captured media and the second representation is; Page 3, Para 1);
Regarding claim 2, Malia teaches “(Original) The method of claim 1, further comprising: while displaying the navigation user interface element: displaying, via the display generation component, a second representation of the first location experience that includes animated content;” (displaying an animated transition from the representation of the first previously captured media item to the representation of the second previously captured media item; Page 3, Para 4);
“receiving, via the one or more input devices, an input selecting the second representation of the first location experience; and” (after (eg, in response to) receiving (1012) an input corresponding to a request to annotate the portion of the first representation, and on the portion of the displayed second representation of the second media… first media (eg, in response to an input corresponding to a selection of the second media) to display in the first representation of the first media; Page 47, Para 7);
“in response to receiving the input selecting the second representation of the first location experience, displaying, via the display generation component, the first location experience.” ((eg, in response to an input corresponding to a selection of the second media) to a physical environment represented in the first representation of the first media based on the determination; Page 48, Para 5);
Regarding claim 9, Malia teaches “(Original) The method of claim 1, further comprising: while displaying the first location experience and the navigation user interface element: displaying, via the display generation component, the first location experience including video having a respective viewpoint that changes over time; and” (one or more cameras (eg, continuously or in fixed A camera that provides a real-time preview of at least a portion of the content within the camera's field of view at intervals and optionally generates a video output including one or more image frames capturing the content within the camera's field of view); Page 4, Para 2);
“displaying, via the display generation component, a visual indication of the respective viewpoint at a location relative to the navigation user interface element that changes over time in accordance with the respective viewpoint that changes over time.” (The method includes when displaying the representation of the field of view: updating the representation of the field of view over time based on changes in the field of view. The change in the field of view includes the movement of the first entity moving the first anchor point, and as the first anchor point moves along the path in the physical environment, the corresponding portion of the representation of the first entity corresponding to the first anchor point moves along the path in the physical environment; Page 4, Para 1);
Regarding claim 17, Malia teaches “(Original) The method of claim 1, further comprising: while displaying, via the display generation component, the first location experience, receiving, via the one or more input devices, one or more inputs corresponding to a request to add an annotation to the first location experience; and” (Upon displaying the first representation of the first media, receiving (1006) corresponds to annotating a portion of the first representation corresponding to the first portion of the physical environment (eg, by adding a virtual object or modifying an existing displayed virtual object, such as 6D-6E, where a note 606 is added to the input of the request to expand the media item 604); Page 47, Para 4);
“in response to receiving the one or more inputs, modifying the first location experience to include the annotation.” (In response to receiving the input, displaying an annotation on the portion of the first representation corresponding to the first portion of the physical environment, the annotation having an annotation based on the physical environment; Page 47, Para 5);
Regarding claim 18, Malia teaches “(Original) A computer system that is in communication with a display generation component and one or more input devices, the computer system comprising:” (According to some embodiments, a computer system (eg, electronic device) includes display generating components (eg, display, projector, head-mounted display, head-up display, etc.), one or more cameras … and one or more input devices; Page 4, Para 2);
“one or more processors;” (one or more processors; Page 2, Para 1);
“memory; and” (memory; Page 2, Para 1);
“one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:” (one or more modules, programs or sets of instructions stored in the memory for performing various functions; Page 2, Para 1);
“displaying, via the display generation component, a user interface” (displaying a user interface; Page 2, Para 3); “that includes a first representation of a first location experience” (a representation of the physical environment, wherein the representation of the physical environment includes a representation of a first physical object occupying a first physical space in the physical environment and having a first corresponding object attribute; Page 2, Para 4); “and a first portion of a navigation user interface element with a first orientation” (12E-12F illustrate the process by which a user performs a zoom-in operation on a still view representation of the physical environment 1200; Page 55, Para 9); “that includes a first representation of a first point of interest experience associated with the first location experience” (In response, FIG. 5I shows displaying a real-time representation 518 of the content displayed in the room 502 based on the location of the user 501; Page 28, Para 1);
“while displaying, via the display generation component, the user interface:
detecting, via the one or more input devices, a first input directed to the first representation of the first location experience; and” (The method includes detecting a first input corresponding to a virtual object, wherein movement of the first input corresponds to a request to move the virtual object in the representation of the physical environment relative to the representation of the first physical object; Page 2, Para 4);
“in response to detecting the first input:
changing the display of the navigation user interface element from the first orientation to a second orientation corresponding to the first location experience; and” (a virtual object at a location in the representation of the physical environment that corresponds to a second physical space in the physical environment that is different from the first physical space; Page 2, Para 4);
“displaying the navigation user interface element to include a second portion, different from the first portion and associated with the first location experience.” (The method includes, after receiving the input, displaying an annotation on a portion of the displayed second representation of the second previously captured media, wherein the second previously captured media is different from the first previously captured media and the second representation is; Page 3, Para 1);
Regarding claim 19, Malia teaches “(Original) A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, cause the computer system to perform a method comprising:” (According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by a computer system comprising (and/or in communication with) a display generation component, one or more cameras, one or more input devices, optionally one or more gesture sensors, optionally one or more sensors for detecting intensity of contacts with a touch-sensitive surface, and optionally one or more tactile output generators, cause the computer system to perform, or cause the performance of, the operations of any of the methods described herein; Page 4, Para 2);
“displaying, via the display generation component, a user interface” (displaying a user interface; Page 2, Para 3); “that includes a first representation of a first location experience” (a representation of the physical environment, wherein the representation of the physical environment includes a representation of a first physical object occupying a first physical space in the physical environment and having a first corresponding object attribute; Page 2, Para 4); “and a first portion of a navigation user interface element with a first orientation” (12E-12F illustrate the process by which a user performs a zoom-in operation on a still view representation of the physical environment 1200; Page 55, Para 9); “that includes a first representation of a first point of interest experience associated with the first location experience” (In response, FIG. 5I shows displaying a real-time representation 518 of the content displayed in the room 502 based on the location of the user 501; Page 28, Para 1);
“while displaying, via the display generation component, the user interface:
detecting, via the one or more input devices, a first input directed to the first representation of the first location experience; and” (The method includes detecting a first input corresponding to a virtual object, wherein movement of the first input corresponds to a request to move the virtual object in the representation of the physical environment relative to the representation of the first physical object; Page 2, Para 4);
“in response to detecting the first input:
changing the display of the navigation user interface element from the first orientation to a second orientation corresponding to the first location experience; and” (a virtual object at a location in the representation of the physical environment that corresponds to a second physical space in the physical environment that is different from the first physical space; Page 2, Para 4);
“displaying the navigation user interface element to include a second portion, different from the first portion and associated with the first location experience.” (The method includes, after receiving the input, displaying an annotation on a portion of the displayed second representation of the second previously captured media, wherein the second previously captured media is different from the first previously captured media and the second representation is; Page 3, Para 1);
Allowable Subject Matter
Claims 3-6, 8, and 10-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20220091722-A1 (Faulkner): Discloses a computer system displaying a three-dimensional environment.
CN-114402290-A (Faulkner): Discloses a computer system displaying a three-dimensional environment.
JP-2023503257-A (Magic Leap Inc): Discloses augmented reality (AR) devices that can be configured to generate a virtual representation of a user's physical environment.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIGITER D PROTAZI, whose telephone number is (571) 272-7995. The examiner can normally be reached Monday through Friday, 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.D.P./Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612