DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the recited computer-readable storage media encompass a transmission medium for data structures and message structures, as described by Applicant's Specification (Para. 83). Thus, the claims, being directed to a signal, are not patent-eligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Qi Sun et al., CN 109765989 A (hereinafter "Sun").
Regarding independent claim 1, Sun discloses a method for localizing an artificial reality system in a real-world space, the method comprising:
detecting the real-world space around the artificial reality system (i.e., embodiments of the disclosure describe generating and using a dynamic map to render virtual environment objects that accurately reflect various aspects of the real world… In this manner, virtual-camera sensor sampling can be performed (e.g., based on a sample priority ranking of the scene detected by the virtual camera for the user's virtual view of the scene) – p. 4, Para. 5);
identifying a failure of automatically matching the real-world space to previously mapped real-world spaces (i.e., quality component 216 may be used to determine the quality of the dynamic map – p. 8, Para. 4; the quality component 216 can determine an overall quality of the dynamic mapping, one way of determining the quality being to determine a combined energy (E) – p. 9, Para. 4; the lower the energy E, the better the quality of the dynamic mapping – p. 9, Para. 5) (an illustrative sketch of this failure check appears after the claim 1 mapping below);
receiving a selection of at least one corner in the real-world space, the at least one corner identified using one or more depth sensors integral with the artificial reality system (i.e., for example, when the virtual view is aligned with the real-world view so that the user views two walls meeting at a corner in both the real world and the virtual scene, the transparency of the virtual scene can be increased so that the user can see the aligned real-world corner and virtual corner; this alignment may allow user interaction with the corner so as to increase the sense of realism. For example, aligning the corner with a certain degree of transparency in this manner may allow the user to touch the visualization of the virtual corner while touching the real-world corner – p. 10, Para. 4; receive the virtual camera sensing sample – Fig. 4, "402");
matching the selected at least one corner to at least one previously mapped corner in the previously mapped real-world spaces (i.e., analyzing dynamic object boundaries to ensure that any change in the position of dynamic real-world objects can be reflected in the virtual environment in real time; in some embodiments, seam carving for image sequences can also be incorporated into the quality determination to allow the wall-selection operation to be performed again to better fit the dynamic user location and real-world space – p. 11, Para. 4),
wherein the at least one previously mapped corner was previously designated for the artificial reality system and associated with localization data for the real-world space (i.e., the user can see the aligned real-world corner and virtual corner; this alignment may allow user interaction with the corner so as to increase the sense of realism – p. 10, Para. 4; the dynamic mapping system shown can operate together with data repository 202, which can store computer instructions (e.g., software program instructions, routines, or services), data, and/or models used in the embodiments described herein; in some implementations, data repository 202 can store information or data received via the various engines and/or components of dynamic mapping system 204 and provide access to that information or data to the engines and/or components as required – p. 7, Para. 1; Fig. 2; profile 614 describes the distortion of the virtual environment, points 616 represent the mapped positions of samples (such as sample 610), and the rest of profile 614 consists of non-sampled points of the scene (such as 612) – Fig. 6B), and
wherein the localization data includes at least one of mesh data, spatial anchor data, scene data, artificial reality space model data, boundary data, or any combination thereof, for the real-world space (i.e., the virtual environment and the real-world space can be represented as polygonal shapes – p. 8, Para. 2; analyzing dynamic object boundaries to ensure that any change in the position of dynamic real-world objects can be reflected in the virtual environment in real time – p. 11, Para. 5);
recovering the localization data corresponding to the previously mapped real-world space having the at least one previously mapped corner matched to the selected at least one corner (i.e., the rest of profile 620 is composed of non-sampled points of the scene (such as 612); as shown, the dynamic mapping can change in real time with the real-world space (e.g., compare profile 614 and its associated mapping samples with profile 620 and its associated mapping samples) – Fig. 6C); and
rendering an artificial reality experience, on the artificial reality system, relative to the real-world space, using the recovered localization data (i.e., at frame 308, real-time rendering of the virtual scene may be performed; rendering can utilize the dynamic mapping between the virtual environment and the real-world space to generate a visualization of the virtual scene of the virtual environment – p. 11, Para. 5).
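For clarity of the mapping above, the following is a non-limiting illustrative sketch of a corner-based relocalization flow of the kind recited in claim 1. All identifiers, data structures, and the 0.05 threshold are hypothetical and supplied for illustration only; they are not drawn from Sun or from the claims.

```python
# Non-limiting illustration; Corner, SpaceMap, localize, and the threshold
# are hypothetical names, not taken from Sun or from the claims.
from dataclasses import dataclass

@dataclass(frozen=True)
class Corner:
    x: float
    y: float
    z: float

@dataclass
class SpaceMap:
    corners: list            # previously designated corners for this space
    localization_data: dict  # e.g., mesh, anchor, scene, or boundary data

def match_quality(detected, stored):
    """Energy-style score; lower is better (cf. Sun, p. 9, Paras. 4-5)."""
    if not detected or not stored.corners:
        return float("inf")
    return sum(
        min((d.x - s.x) ** 2 + (d.y - s.y) ** 2 + (d.z - s.z) ** 2
            for s in stored.corners)
        for d in detected
    ) / len(detected)

def localize(detected_corners, stored_maps, select_corner, threshold=0.05):
    # 1) Attempt automatic matching against previously mapped spaces.
    best = min(stored_maps, key=lambda m: match_quality(detected_corners, m))
    if match_quality(detected_corners, best) <= threshold:
        return best.localization_data
    # 2) Automatic matching failed: receive a user selection of at least
    #    one corner and match it to previously mapped corners (cf. claim 1).
    corner = select_corner(detected_corners)
    best = min(
        stored_maps,
        key=lambda m: min(
            (corner.x - s.x) ** 2 + (corner.y - s.y) ** 2
            + (corner.z - s.z) ** 2
            for s in m.corners
        ),
    )
    # 3) Recover the localization data of the matched space, which the
    #    renderer then uses to place the artificial reality experience.
    return best.localization_data
```

The energy-style check mirrors Sun's teaching that the quality component determines a combined energy E, with lower E indicating a better-quality dynamic mapping (p. 9, Paras. 4-5).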
Regarding claim 2, Sun discloses the method of claim 1, wherein the localization data includes the mesh data for the real-world space, and wherein recovering the localization data includes: capturing a mesh for the real-world space by scanning the real-world space with the artificial reality system (i.e., the virtual environment and the real-world space can be represented as polygonal shapes – p. 8, Para. 2); and matching the captured mesh to a previously generated mesh stored in the mesh data (i.e., tracking marks may be used so that the camera can detect the three-dimensional positions of real-world objects; after determining the position of a real-world object, the matching component may attempt to match the real-world object to a virtual object using a predefined list of virtual objects and virtual-object positions – p. 10, Para. 2; the rest of profile 620 is composed of non-sampled points of the scene (such as 612); as shown, the dynamic mapping can change in real time with the real-world space (e.g., compare profile 614 and its associated mapping samples with profile 620 and its associated mapping samples) – Fig. 6C).
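As an illustration of the recited mesh matching, the following hypothetical sketch compares a captured mesh, represented here simply as a set of vertices, against previously stored meshes using a symmetric nearest-vertex distance; the representation and the distance measure are assumptions, not taken from Sun or the claims.

```python
# Hypothetical illustration; meshes are modeled as lists of (x, y, z) tuples.
def chamfer_distance(mesh_a, mesh_b):
    """Symmetric average nearest-vertex squared distance between two meshes."""
    def one_way(src, dst):
        return sum(
            min(sum((p - q) ** 2 for p, q in zip(a, b)) for b in dst)
            for a in src
        ) / len(src)
    return one_way(mesh_a, mesh_b) + one_way(mesh_b, mesh_a)

def match_mesh(captured, stored_meshes):
    """Return the stored mesh closest to the captured scan (cf. claim 2)."""
    return min(stored_meshes, key=lambda m: chamfer_distance(captured, m))

# Example: the captured scan matches the first stored mesh, not the second.
captured = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
stored = [[(0.1, 0.0, 0.0), (1.1, 0.0, 0.0)], [(5.0, 5.0, 0.0)]]
print(match_mesh(captured, stored))
```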
Regarding claim 3, Sun discloses the method of claim 1, wherein detecting the real-world space includes obtaining semantic identification of the real-world space, and wherein recovering the localization data for the real-world space is further based on the obtained semantic identification of the real-world space (i.e., tracking marks may be used so that the camera can detect the three-dimensional positions of real-world objects; after determining the position of a real-world object, the matching component may attempt to match the real-world object to a virtual object using a predefined list of virtual objects and virtual-object positions. One matching method compares the height value of the real-world object with the height value of the virtual object; for example, if the height value of a real-world chair and the height value of a virtual chair are within a predetermined threshold of each other, the matching component can match the real-world chair with the virtual chair. Thus, when the real-world object and the virtual object are matched, the matching component can place the virtual object at the position of the real-world object – p. 10, Para. 2).
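The height-comparison matching Sun describes at p. 10, Para. 2 can be sketched as follows; the object representation and the 0.1 m threshold are assumptions supplied for illustration, as Sun specifies only that the height values be within a predetermined threshold of each other.

```python
# Illustrative sketch of Sun's height-threshold matching (p. 10, Para. 2);
# dictionary keys and the 0.1 m default threshold are assumptions.
def match_by_height(real_objects, virtual_objects, threshold=0.1):
    """Pair each real-world object with a virtual object whose height value
    is within the predetermined threshold, then place the virtual object at
    the real-world object's position."""
    placements = []
    remaining = list(virtual_objects)
    for real in real_objects:  # real: dict with "height" and "position"
        for virt in remaining:
            if abs(real["height"] - virt["height"]) <= threshold:
                placements.append({"virtual": virt["name"],
                                   "position": real["position"]})
                remaining.remove(virt)
                break
    return placements

# Example: a ~0.9 m real-world chair matches a virtual chair of similar height.
print(match_by_height(
    [{"height": 0.9, "position": (1.0, 0.0, 2.0)}],
    [{"name": "virtual_chair", "height": 0.92}],
))
```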
Regarding claim 4, Sun discloses the method of claim 1, wherein the selected at least one corner includes two adjacent corners (i.e., for example, when the virtual view is aligned with the real-world view so that the user views two walls meeting at a corner in both the real world and the virtual scene, the transparency of the virtual scene can be increased so that the user can see the aligned real-world corner and virtual corner; this alignment may allow user interaction with the corner so as to increase the sense of realism. For example, aligning the corner with a certain degree of transparency in this manner may allow the user to touch the visualization of the virtual corner while touching the real-world corner – p. 10, Para. 4).
Regarding claim 5, Sun discloses the method of claim 4, wherein the method further comprises: identifying three walls of the real-world space using the two adjacent corners, wherein recovering the localization data includes: matching the identified three walls of the real-world space to three previously designated walls identified in the localization data (i.e., for example, when the virtual view is aligned with the real-world view so that the user views two walls meeting at a corner in both the real world and the virtual scene, the transparency of the virtual scene can be increased so that the user can see the aligned real-world corner and virtual corner; this alignment may allow user interaction with the corner so as to increase the sense of realism – p. 10, Para. 4; the user's field of view includes three walls, including two vertical walls and the floor – Fig. 9).
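For illustration of identifying three walls from two adjacent corners, the following hypothetical 2-D floor-plan sketch assumes right-angled walls; the construction and all names are assumptions, not taken from Sun or from the claims.

```python
# Hypothetical floor-plan geometry: two adjacent corners imply the shared
# wall between them plus one wall extending from each corner. The assumed
# right angles and the wall_length default are illustrative only.
import math

def walls_from_adjacent_corners(c1, c2, wall_length=3.0):
    """Return three wall segments: the shared wall c1-c2, plus one wall
    extending from each corner perpendicular to the shared wall."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    norm = math.hypot(dx, dy)
    # Unit normal to the shared wall (direction the side walls extend).
    nx, ny = -dy / norm, dx / norm
    side1 = (c1, (c1[0] + nx * wall_length, c1[1] + ny * wall_length))
    side2 = (c2, (c2[0] + nx * wall_length, c2[1] + ny * wall_length))
    return [(c1, c2), side1, side2]

# Example: corners at (0, 0) and (4, 0) yield one shared wall and two sides.
print(walls_from_adjacent_corners((0.0, 0.0), (4.0, 0.0)))
```

The three resulting segments could then be compared against the three previously designated walls recorded in the localization data, in the manner of the mesh comparison sketched under claim 2.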
Regarding claim 6, Sun discloses the method of claim 1, wherein at least one of the at least one previously mapped corner in the previously mapped real-world space was previously designated by a manual selection by a user of the artificial reality system (i.e., the virtual environment can be accessed or referenced by dynamic mapping engine 206 for dynamic mapping to the real-world space; in this aspect, dynamic mapping engine 206 may access or retrieve the virtual environment via the user device, comprising the virtual scene the user is currently viewing; as another example, dynamic mapping engine 206 receives the virtual environment from data repository 202 and/or from a remote device (such as from a server or user device) – p. 7, Para. 4; performing dynamic mapping alignment may allow user interaction with the corner so as to increase the sense of realism – p. 10, Para. 4; dynamic mapping is used to adapt the operation to changes as the user moves and the virtual environment and/or the real-world space changes – p. 11, Paras. 1-2; analyzing dynamic object boundaries to ensure that any change in the position of dynamic real-world objects can be reflected in the virtual environment in real time – p. 11, Para. 5).
Regarding claim 7, Sun discloses the method of claim 1, wherein at least one of the at least one previously mapped corner in the previously mapped real-world space was previously designated by an automatic selection by the artificial reality system (i.e., for example, when the user changes the field of view in the virtual environment, the dynamic mapping can update automatically in real time; as another example, when any change occurs in the real-world space, the dynamic mapping can update automatically in real time – p. 7, Para. 4).
Regarding claim 8, Sun discloses the method of claim 1, wherein the localization data is manually adjustable by a user of the artificial reality system (i.e., the virtual environment can be accessed or referenced by dynamic mapping engine 206 for dynamic mapping to the real-world space; in this aspect, dynamic mapping engine 206 may access or retrieve the virtual environment via the user device, comprising the virtual scene the user is currently viewing; as another example, dynamic mapping engine 206 receives the virtual environment from data repository 202 and/or from a remote device (such as from a server or user device) – p. 7, Para. 4; performing dynamic mapping alignment may allow user interaction with the corner so as to increase the sense of realism – p. 10, Para. 4; dynamic mapping is used to adapt the operation to changes as the user moves and the virtual environment and/or the real-world space changes – p. 11, Paras. 1-2).
Regarding claim 9, Sun discloses the method of claim 1, further comprising: displaying at least a portion of the recovered localization data prior to rendering the artificial reality experience (i.e., dynamic mapping is used to adapt the operation to changes as the user moves and the virtual environment and/or the real-world space changes; the utilization of dynamic mapping reduces distortion and provides a more convincing and pleasant immersion in terms of visual and tactile perception – p. 4, Para. 4; method 300 for dynamically mapping and rendering a visualization of the virtual scene of the virtual environment; variation of the user's field of view may be due to a virtual object and/or a real-world object being moved – p. 11, Paras. 1-2; Fig. 3; Fig. 9).
Regarding independent claim 10, the claim is similar in scope to claim 1; therefore, the rationale applied in the rejection of claim 1 applies herein.
Regarding claims 11-16 and 18-20, the corresponding rationale as applied in the rejection of claims 1-9 applies herein.
Regarding independent claim 17, the claim is similar in scope to claim 1; therefore, the rationale applied in the rejection of claim 1 applies herein.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615