DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA .
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Objections
Claims 1 to 7 recite the reference characters "(1)", "(2)", "(3)", "(4)", and "(5)". Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 – 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention.
Claims 1, 6, 7, 8, 13, and 14 recite “…systems/devices…”. This creates ambiguity because it is unclear whether “systems/devices” is directed to systems, to devices, or to both.
Claim 1 recites “astronaut(s)”. This creates ambiguity because it is unclear whether the limitation is directed to one astronaut or to more than one astronaut.
Claim 1 recites “avatar(s)”. This creates ambiguity because it is unclear whether the limitation is directed to one avatar or to more than one avatar.
Claim 1 recites “user(s)”. This creates ambiguity because it is unclear whether the limitation is directed to one user or to more than one user.
Claim 8 recites “avatar(s)”. This creates ambiguity because it is unclear whether the limitation is directed to one avatar or to more than one avatar.
Claim 8 recites “user(s)”. This creates ambiguity because it is unclear whether the limitation is directed to one user or to more than one user.
Dependent claims not specifically addressed above are rejected for inheriting the deficiencies of the claims from which they depend.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “acquiring, by means of …” and “generating, by means of …” in claims 6 and 7, and “acquiring, by means of …” and “generating, by means of …” in claims 13 and 14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al. (Publication: US 2019/0094981 A1) in view of Rutschman et al. (Publication: US 2018/0064335 A1).
Regarding claim 1, Bradski discloses a method of providing a [[space]] extended reality service on earth, comprising (
[0018], [0203] – method providing augmented reality using sensors to perform mapping of the physical environment around the user.
[0018], [0537] - Head-mounted augmented reality device contains a memory that stores instructions processed by the processor.):
a) acquiring, by means of one or more acquisition systems/devices (1) installed on a [[space]] platform (2), real-time data related to a surrounding [[space]] environment and/or to one or more [[astronauts]] (3) in said [[space]] environment ([0780] the AR system may receive input (e.g., visual input, sensory input, auditory input, knowledge bases, etc.) from one or more users of a particular around environment. As described previously, this may be achieved through various input devices, and knowledge already stored in the map database. The user's cameras, sensors, GPS system, eye tracking etc. ,“means”, conveys information to the system (step 5002). It should be appreciated that such information may be collected from a plurality of users to comprehensively populate the map database with real-time and up-to-date information, “real-time data related to a surrounding”.);
b) generating, by means of a computer graphics processing device/system (4), based on the acquired real-time data and on synthetic data, a three-dimensional extended reality environment reproducing the [[space]] environment and one or more three-dimensional avatar(s) of the [[astronaut(s)]] (3) reproducing movements and/or actions and/or facial expressions and/or voice of said [[astronaut(s)]] (3) ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos), “based on the acquired real-time data and on synthetic data”. The wearable local system may calculate pose using one of the pose calculation techniques mentioned. The cloud 6454 may use images and fiducials to segment 3-D objects from more static 3-D background.
[0959] virtual and/or augmented user experience is created such that remote avatars associated with users may be animated based at least in part upon data on a wearable device with input from sources such as voice inflection analysis and facial recognition analysis, as conducted by pertinent software modules. For example, referring back to FIG. 60, the bee avatar 6002 may be animated to have a friendly smile based upon facial recognition of a smile upon the user's face, “reproducing the [[space]] environment and one or more three-dimensional avatar(s) of the [[astronaut(s)]] (3) reproducing movements and/or actions and/or facial expressions and/or voice of said [[astronaut(s)]]”),
wherein the synthetic data digitally represent the [[space]] environment and/or the [[astronaut(s)]] (3) ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos)); and
c) providing one or more users (5) on earth with a [[space]] extended reality service based on the generated three-dimensional extended reality environment and avatar(s) ([0712] The AR system may render an avatar presence in a virtual space with no instrumentation, and allow virtual interaction. The passable world model allows a first user to pass a second user a copy of the first user's section of the world (e.g., a level that runs locally). If the second user's individual AR system is performing local rendering, all the first user's individual AR system needs to send is the skeletal animation.
[1279] - the first user may want to share a file with another user. This action may be animated in a playful manner by populating both the systems through avatars, “service”.
[1277] - Fig. 123B, the avatar is populated in 3D. Once the icon has been selected, the avatar may open up the game (using the avatar hand gesture, as shown in 12308). The game may then be rendered in 3D to the user. In one embodiment, the avatar may disappear after the user has selected the game, or in other embodiments, the avatar may remain, and the user may be free to choose other options/icons for other functionality as well, “three-dimensional extended reality environment and avatar(s)”.).
Bradski does not disclose the following limitations; however, Rutschman discloses:
astronaut(s) and space platform ([0014] – space station, or other crew spacecraft.);
space platform ([0014] – space station, or other crew spacecraft.);
space environment ([0014] – outer space flights);
providing a space extended reality service on earth ([0014], [0018] – the retinal imaging device is incorporated into an augmented reality headset on the space station.
[0109] – in a space environment, the device is used throughout a space voyage by astronauts to monitor for and detect retinal pathologies. The image processor 412 can obtain retinal image data from the image sensor 408 and perform image analysis to detect one or more potential pathologies. Upon detection, the image processor 412 can immediately transmit via the communication interface along with astronaut-identifying information. Upon detection of an increased signal strength, such as when positioned over the Earth-based ground station, the image processor 412 can transmit retinal imagery associated with the detected pathology, send data to ground station, “service on earth”);
space extended reality service on earth ([0014], [0018], [0109] – see the passages reproduced above);
A space extended reality service provided on earth ([0014], [0018], [0109] – see the passages reproduced above).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bradski with the astronaut(s), the space platform, the space environment, and the space extended reality service provided on earth, as taught by Rutschman. The motivation for doing so is to transmit emergency or urgent information in a more timely manner.
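For illustration only, the claimed flow of steps a) through c) may be sketched in Python as follows. This is a hypothetical sketch by the examiner under stated assumptions; the data structures and names (RealTimeData, XRScene, acquire, generate, provide) are placeholders and are not taken from the instant application, Bradski, or Rutschman.

    # Hypothetical sketch of claimed steps a)-c); all names are illustrative
    # placeholders, not drawn from the application or the cited references.
    from dataclasses import dataclass
    from typing import List, Tuple

    Point = Tuple[float, float, float]

    @dataclass
    class RealTimeData:                # output of step a)
        point_cloud: List[Point]      # body position / body attitude samples
        expression: str               # e.g., derived from facial imagery
        voice: str                    # captured audio (placeholder)

    @dataclass
    class XRScene:                     # output of step b)
        environment: str              # synthetic (e.g., CAD / digital-twin) reproduction
        avatar_pose: List[Point]      # avatar mirrors the captured movements
        avatar_expression: str

    def acquire() -> RealTimeData:
        # step a): acquisition systems/devices on the platform capture real-time data
        return RealTimeData(point_cloud=[(0.0, 0.0, 1.7)], expression="smile", voice="status nominal")

    def generate(data: RealTimeData, synthetic_env: str) -> XRScene:
        # step b): fuse the real-time data with synthetic data into a 3D XR environment plus avatar
        return XRScene(environment=synthetic_env,
                       avatar_pose=data.point_cloud,
                       avatar_expression=data.expression)

    def provide(scene: XRScene, users: List[str]) -> None:
        # step c): deliver the generated environment and avatar(s) to the users
        for user in users:
            print(f"rendering {scene.environment} with avatar ({scene.avatar_expression}) for {user}")

    provide(generate(acquire(), synthetic_env="platform digital twin"), users=["ground user 1"])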
Regarding claim 2, Bradski in view of Rutschman discloses all the limitations of claim 1, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the real-time data include data such as point clouds or the like that digitally represent body position and/or body attitude and/or facial expression of the [[astronaut(s)]] ( [0722] - The AR system may infer a location of a user's avatar simply based on a position of the user's head and/or hands with respect to the environment. The AR system may statistically process voice inflection (e.g., not content of utterances), and animate or modify an emotional expression of the corresponding avatar to reflect an emotion of the respective user which the avatar represents.
[1637], [0802] - the user's face is represented by a plurality of point clouds.
[0719] - The AR system may perform functions such along with real-time texture mapping, applying images (e.g., video) to the avatar, “real-time data”.).
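For illustration only, the following minimal Python sketch shows one way point-cloud data could digitally represent body position and body attitude. It is a hypothetical example by the examiner; the functions centroid and lean_angle_deg are assumed placeholder names and do not come from the application or the cited references.

    # Hypothetical illustration of point clouds representing body position/attitude.
    import math
    from typing import List, Tuple

    Point = Tuple[float, float, float]

    def centroid(cloud: List[Point]) -> Point:
        # crude body-position estimate: average of the captured points
        n = len(cloud)
        return (sum(p[0] for p in cloud) / n,
                sum(p[1] for p in cloud) / n,
                sum(p[2] for p in cloud) / n)

    def lean_angle_deg(cloud: List[Point]) -> float:
        # crude body-attitude estimate: lean of the highest captured point
        # relative to the lowest one, measured from vertical
        lo = min(cloud, key=lambda p: p[2])
        hi = max(cloud, key=lambda p: p[2])
        return math.degrees(math.atan2(hi[0] - lo[0], hi[2] - lo[2]))

    body_cloud = [(0.00, 0.0, 0.0), (0.05, 0.0, 0.9), (0.12, 0.0, 1.7)]   # three sampled points
    print(centroid(body_cloud), round(lean_angle_deg(body_cloud), 1))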
Regarding claim 3, Bradski in view of Rutschman discloses all the limitations of claim 1, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the real-time data include images and/or photos and/or video data and/or audio data ([0719] - The AR system may perform functions such along with real-time texture mapping, applying images (e.g., video) to the avatar).
Regarding claim 4, Bradski in view of Rutschman discloses all the limitations of claim 1, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the synthetic data are produced based on one or more computer-aided design models and/or one or more digital twin models that digitally represent the [[space]] environment and/or the [[astronaut(s)]] ( [0607] - the conference room scene may be rendered on the user's table. Thus, even if there is no camera at the conference room, the passable world model, using information collected through prior key frames etc., is able to transmit information about the conference room to other users and recreate the geometry of the room for other users in other spaces, “digital twin models that digitally represent the environment”.
[0187] The data stored in one or more servers 11 within the computing network 5 is, in one embodiment, transmitted or deployed at a high-speed, and with low latency, to one or more user devices 12 and/or gateway components 14. In one embodiment, object data shared by servers may be complete or may be compressed, and contain instructions for recreating the full object data on the user side, rendered and visualized by the user's local computing device (e.g., gateway 14 and/or user device 12), “the synthetic data are produced based on one or more computer-aided design models and/or one or more digital twin models”. ).
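For illustration only, the following minimal Python sketch shows how synthetic data could be produced from a stored digital-twin or CAD-style model when no live camera is present. It is a hypothetical example by the examiner; the class and member names (DigitalTwin, primitives, synthesize) are assumed placeholders, not taken from the application or the cited references.

    # Hypothetical sketch of synthetic data generated from a stored digital-twin model.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Box = Tuple[float, float, float]        # width, depth, height of a stored primitive (metres)

    @dataclass
    class DigitalTwin:
        name: str
        primitives: Dict[str, Box]          # geometry kept from CAD data or prior scans

        def synthesize(self) -> List[str]:
            # recreate the stored geometry for a remote viewer with no live camera present
            return [f"{part}: {w:.2f} x {d:.2f} x {h:.2f} m"
                    for part, (w, d, h) in self.primitives.items()]

    module_twin = DigitalTwin(name="crew module",
                              primitives={"table": (1.2, 0.8, 0.75), "rack": (0.5, 0.6, 2.0)})
    for line in module_twin.synthesize():
        print(line)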
Regarding claim 5, Bradski in view of Rutschman discloses all the limitations of claim 1, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the user(s) (5) use(s) one or more extended reality devices (6) to experience the [[space]] extended reality service ([0808] The system may share basic elements (walls, windows, desk geometry, etc.) with any user who walks into the room in virtual or augmented reality, and in one embodiment that person's system will take images from his particular perspective and upload those to the cloud. Then the cloud becomes populated with old and new sets of data and can run optimization routines and establish fiducials that exist on individual objects.).
Regarding claim 6, Bradski in view of Rutschman discloses a space extended reality service provided on earth by implementing the method as claimed in claim 1 (see the rejection of claim 1).
Regarding claim 7, Bradski discloses a system designed to provide a [[space]] extended reality service on earth, comprising: [[-]] one or more acquisition systems/devices (1) installed on a [[space]] platform (2) and configured to acquire real-time data related to a surrounding [[space]] environment and/or to one or more [[astronauts]] (3) in said [[space]] environment ([0018], [0203] – method providing augmented reality using sensors to perform mapping of the physical environment around the user.
[0018], [0537] - Head-mounted augmented reality device contains a memory that stores instructions processed by the processor.
[0547] - “outward facing” camera that captures images of the ambient environment);
and [[-]] a computer graphics processing device/system (4) configured to receive the acquired real-time data and to carry out the steps b) and c) of the method as claimed in any one of claims 1 ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos). see rejection on claim 1.).
Regarding claim 8, Bradski discloses a method of providing an earth extended reality service on a [[space]] platform, comprising ([0018], [0537] - Head-mounted augmented reality device contains a memory that stores instructions processed by the processor.
[0018], [0203] – method providing augmented reality using sensors to perform mapping of the physical environment around the user, “platform”.):
a) acquiring, by means of one or more acquisition systems/devices installed on earth, real-time data related to a surrounding ground environment and/or to one or more ground users in said ground environment ([0780] – the AR system may receive input (e.g., visual input, sensory input, auditory input, knowledge bases, etc.) from one or more users of a particular around environment. As described previously, this may be achieved through various input devices, and knowledge already stored in the map database. The user's cameras, sensors, GPS system, eye tracking etc., “means”, conveys information to the system (step 5002). It should be appreciated that such information may be collected from a plurality of users to comprehensively populate the map database with real-time and up-to-date information, “real-time data related to a surrounding”.);
b) generating, by means of a computer graphics processing device/system, based on the acquired real-time data and on synthetic data, a three-dimensional extended reality environment reproducing the ground environment and one or more three-dimensional avatar(s) of the ground user(s) reproducing movements and/or actions and/or facial expressions and/or voice of said ground user(s) ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos), “based on the acquired real-time data and on synthetic data”. The wearable local system may calculate pose using one of the pose calculation techniques mentioned. The cloud 6454 may use images and fiducials to segment 3-D objects from more static 3-D background.
[0959] virtual and/or augmented user experience is created such that remote avatars associated with users may be animated based at least in part upon data on a wearable device with input from sources such as voice inflection analysis and facial recognition analysis, as conducted by pertinent software modules. For example, referring back to FIG. 60, the bee avatar 6002 may be animated to have a friendly smile based upon facial recognition of a smile upon the user's face, “reproducing the ground environment and one or more three-dimensional avatar(s) of the ground user(s) reproducing movements and/or actions and/or facial expressions and/or voice of said ground user(s)”),
wherein the synthetic data digitally represent the ground environment and/or the ground user(s) ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos)); and
c) providing one or more [[astronauts]] on a [[space]] platform with an earth extended reality service based on the generated three-dimensional extended reality environment and avatar(s) ([0712] The AR system may render an avatar presence in a virtual space with no instrumentation, and allow virtual interaction. The passable world model allows a first user to pass a second user a copy of the first user's section of the world (e.g., a level that runs locally). If the second user's individual AR system is performing local rendering, all the first user's individual AR system needs to send is the skeletal animation.
[1279] - the first user may want to share a file with another user. This action may be animated in a playful manner by populating both the systems through avatars, “service”.
[1277] - Fig. 123B, the avatar is populated in 3D. Once the icon has been selected, the avatar may open up the game (using the avatar hand gesture, as shown in 12308). The game may then be rendered in 3D to the user. In one embodiment, the avatar may disappear after the user has selected the game, or in other embodiments, the avatar may remain, and the user may be free to choose other options/icons for other functionality as well, “three-dimensional extended reality environment and avatar(s)”.).
Bradski does not disclose the following limitations; however, Rutschman discloses:
astronaut(s) and space platform ([0014] – space station, or other crew spacecraft.);
space platform ([0014] – space station, or other crew spacecraft.);
space environment ([0014] – outer space flights);
providing a space extended reality service on earth ([0014], [0018] – the retinal imaging device is incorporated into an augmented reality headset on the space station.
[0109] – in a space environment, the device is used throughout a space voyage by astronauts to monitor for and detect retinal pathologies. The image processor 412 can obtain retinal image data from the image sensor 408 and perform image analysis to detect one or more potential pathologies. Upon detection, the image processor 412 can immediately transmit via the communication interface along with astronaut-identifying information. Upon detection of an increased signal strength, such as when positioned over the Earth-based ground station, the image processor 412 can transmit retinal imagery associated with the detected pathology, send data to ground station, “service on earth”);
space extended reality service on earth ([0014], [0018], [0109] – see the passages reproduced above);
A space extended reality service provided on earth ([0014], [0018], [0109] – see the passages reproduced above).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bradski with the astronaut(s), the space platform, the space environment, and the space extended reality service provided on earth, as taught by Rutschman. The motivation for doing so is to transmit emergency or urgent information in a more timely manner.
Regarding claim 9, Bradski in view of Rutschman discloses all the limitations of claim 8, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the real-time data include data such as point clouds or the like that digitally represent body position and/or body attitude and/or facial expression of the ground user(s) ([0722] - The AR system may infer a location of a user's avatar simply based on a position of the user's head and/or hands with respect to the environment. The AR system may statistically process voice inflection (e.g., not content of utterances), and animate or modify an emotional expression of the corresponding avatar to reflect an emotion of the respective user which the avatar represents.
[1637], [0802] - the user's face is represented by a plurality of point clouds.
[0719] - The AR system may perform functions such along with real-time texture mapping, applying images (e.g., video) to the avatar, “real-time data”.).
Regarding claim 10, Bradski in view of Rutschman discloses all the limitations of claim 8, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the real-time data include images and/or photos and/or video data and/or audio data ([0719] - The AR system may perform functions such along with real-time texture mapping, applying images (e.g., video) to the avatar).
Regarding claim 11, Bradski in view of Rutschman discloses all the limitations of claim 8, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the synthetic data are produced based on one or more computer-aided design models and/or one or more digital twin models that digitally represent the ground environment and/or the ground user(s) ( [0607] - the conference room scene may be rendered on the user's table. Thus, even if there is no camera at the conference room, the passable world model, using information collected through prior key frames etc., is able to transmit information about the conference room to other users and recreate the geometry of the room for other users in other spaces, “digital twin models that digitally represent the environment”.
[0187] The data stored in one or more servers 11 within the computing network 5 is, in one embodiment, transmitted or deployed at a high-speed, and with low latency, to one or more user devices 12 and/or gateway components 14. In one embodiment, object data shared by servers may be complete or may be compressed, and contain instructions for recreating the full object data on the user side, rendered and visualized by the user's local computing device (e.g., gateway 14 and/or user device 12), “the synthetic data are produced based on one or more computer-aided design models and/or one or more digital twin models”.).
Regarding claim 12, Bradski in view of Rutschman discloses all the limitations of claim 8, including astronaut(s), a space platform, and a space extended reality service provided on earth.
Bradski discloses wherein the [[astronaut(s)]] use(s) one or more extended reality devices to experience the earth extended reality service ([0808] The system may share basic elements (walls, windows, desk geometry, etc.) with any user who walks into the room in virtual or augmented reality, and in one embodiment that person's system will take images from his particular perspective and upload those to the cloud. Then the cloud becomes populated with old and new sets of data and can run optimization routines and establish fiducials that exist on individual objects.).
Regarding claim 13, Bradski in view of Rutschman discloses an earth extended reality service provided on a [[space]] platform by implementing the method as claimed in claim 8 (see the rejection of claim 8).
Regarding claim 14, Bradski discloses a system designed to provide an earth extended reality service on a [[space]] platform, comprising: [[-]] one or more acquisition systems/devices installed on earth and configured to acquire real-time data related to a surrounding ground environment and/or to one or more ground users in said ground environment ([0018], [0203] – method providing augmented reality using sensors to perform mapping of the physical environment around the user.
[0018], [0537] - Head-mounted augmented reality device contains a memory that stores instructions processed by the processor.
[0547] - “outward facing” camera that captures images of the ambient environment);
and [[-]] a computer graphics processing device/system configured to receive the acquired real-time data and to carry out the steps b) and c) of the method as claimed in any one of claims 8 ([0931] - a wearable system may capture image information and extract fiducials and recognized points 6452. Images may provide textures maps for objects and the world (textures may be real-time videos). See the rejection of claim 8.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING WU whose telephone number is (571)270-0724. The examiner can normally be reached on Monday - Thursday and alternate Fridays: 9:30am - 6:00pm EST .
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING WU/
Primary Examiner, Art Unit 2618