DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11715269 in view of Eade (US Pat. Pub. No. 20180285052, “Eade”) and Dintenfass (US Pat. Pub. No. 20180158156, “Dintenfass”).
Appl/Pat. No.     Claim Correspondence
Appl. 18780268    1-20
Pat. 11715269     1-20
Claim 1 of instant Appl. 18780268 vs. Claim 1 of Pat. 11715269
Appl.: A computer-implemented method comprising:
Pat.: A computer-implemented method comprising:
Appl.: acquiring, via a client device within a real-world environment, information representative of the real-world environment;
Pat.: acquire, via an imaging device included in the client device, an image of a real-world environment; detect, via a feature detection algorithm, an image feature included in the image; and identify a feature ray associated with the imaging device and the detected image feature;
Appl.: transmitting the information representative of the real-world environment to a relocalization service;
Pat.: transmitting the feature ray to a relocalization service that relocalizes the client device within the real-world environment based on at least the feature ray;
Appl.: receiving, from the relocalization service in response to the information representative of the real-world environment: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point;
Pat.: and receiving, from the relocalization service in response to the feature ray: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point.
Appl.: sending a query comprising an identifier associated with the anchor point to an asset management service;
Appl.: obtaining, from the asset management service in response to the query, information representative of at least one digital asset;
Appl.: and presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Claim 1 of the instant application differs from claim 1 of Pat. 11715269 in reciting:
sending a query comprising an identifier associated with the anchor point to an asset management service; obtaining, from the asset management service in response to the query, information representative of at least one digital asset; and presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Dintenfass teaches sending a query comprising an identifier associated with the anchor point to an asset management service ([0122] “At step 414, the augmented reality user device 200 generates a property token 110. In one embodiment, the augmented reality user device 200 generates a property token 110 comprising the user identifier 108, the location identifier 112, and the property profile 114. In other embodiments, the augmented reality user device 200 generates a property token 110 comprising any other information. At step 416, the augmented reality user device 200 sends the property token 110 to a remote server 102”);
obtaining, from the asset management service in response to the query, information representative of at least one digital asset ([0123] “At step 418, the augmented reality user device 200 receives virtual assessment data 111 from the remote server 102 in response to sending the property token 110 to the remote server 102”).
Dintenfass and claim 1 of Pat. 11715269 are analogous art, as both relate to virtual data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 1 of Pat. 11715269 by including sending a query comprising an identifier associated with the anchor point to an asset management service and obtaining, from the asset management service in response to the query, information representative of at least one digital asset, as taught by Dintenfass.
The motivation for the above is to collect related information associated with the anchor point.
Claim 1 of Pat. 11715269 as modified by Dintenfass is silent about presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Eade teaches presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point (Eade Fig. 5 shows virtual object 50 relative to the mapped position of the anchor point 56 and the determined position of client device 30 or 34; [0023] “…These anchors are world-locked, and the holograms are configured to be displayed in a location that is computed relative to the anchor”; [0048] “Keyframes 60A-Q contain sets of information that can be used to improve the ability of the display device to ascertain its location, and thus help render holograms in stable locations”).
Eade, claim 1 of Pat. 11715269, and Dintenfass are analogous art, as all of them relate to virtual data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified claim 1 of Pat. 11715269, as modified by Dintenfass, by presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point, as taught by Eade.
The motivation for the above is to synchronize the positions of the digital asset, the client device, and the anchor point.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12148112 in view of Eade (US Pat. Pub. No. 20180285052, “Eade”) and Dintenfass (US Pat. Pub. No. 20180158156, “Dintenfass”).
Appl/Pat. No.     Claim Correspondence
Appl. 18780268    1-20
Pat. 12148112     1-20
Claim 1 of Appl. 18780268 vs. Claim 1 of Pat. 12148112
Appl.: A computer-implemented method comprising:
Pat.: A computer-implemented method comprising:
Appl.: acquiring, via a client device within a real-world environment, information representative of the real-world environment;
Pat.: directing a client device to: acquire, via an imaging device included in the client device, an image of a real-world environment; identify a feature ray associated with the imaging device and a detected image feature;
Appl.: transmitting the information representative of the real-world environment to a relocalization service;
Pat.: transmitting the feature ray to a relocalization service that relocalizes the client device within the real-world environment based on at least the feature ray;
Appl.: receiving, from the relocalization service in response to the information representative of the real-world environment: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point;
Pat.: receiving, from the relocalization service in response to the feature ray: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point;
Appl.: sending a query comprising an identifier associated with the anchor point to an asset management service;
Pat.: sending, to an asset management service, a query comprising: an identifier associated with the anchor point.
Appl.: obtaining, from the asset management service in response to the query, information representative of at least one digital asset; and presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Claim 1 of the instant application differs from claim 1 of Pat. 12148112 in reciting:
obtaining, from the asset management service in response to the query, information representative of at least one digital asset; and presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Dintenfass teaches obtaining, from the asset management service in response to the query, information representative of at least one digital asset ([0123] “At step 418, the augmented reality user device 200 receives virtual assessment data 111 from the remote server 102 in response to sending the property token 110 to the remote server 102”).
Dintenfass and claim 1 of Pat. 12148112 are analogous art, as both relate to virtual data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 1 of Pat. 12148112 by including obtaining, from the asset management service in response to the query, information representative of at least one digital asset, as taught by Dintenfass.
The motivation for the above is to collect related information associated with the anchor point.
Claim 1 of Pat. 12148112 as modified by Dintenfass is silent about presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Eade teaches presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point (Eade Fig. 5 shows virtual object 50 relative to the mapped position of the anchor point 56 and the determined position of client device 30 or 34; [0023] “…These anchors are world-locked, and the holograms are configured to be displayed in a location that is computed relative to the anchor”; [0048] “Keyframes 60A-Q contain sets of information that can be used to improve the ability of the display device to ascertain its location, and thus help render holograms in stable locations”).
Eade, claim 1 of Pat. 12148112, and Dintenfass are analogous art, as all of them relate to virtual data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified claim 1 of Pat. 12148112, as modified by Dintenfass, by presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point, as taught by Eade.
The motivation for the above is to synchronize the positions of the digital asset, the client device, and the anchor point.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11132841. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of the instant application is broader than claim 1 of Pat. 11132841.
Appl/Pat. No.     Claim Correspondence
Appl. 18780268    1-20
Pat. 11132841     1-20
Claim 1 of Appl. 18780268 vs. Claim 1 of Pat. 11132841
Appl.: A computer-implemented method comprising:
Pat.: A computer-implemented method comprising:
Appl.: acquiring, via a client device within a real-world environment, information representative of the real-world environment;
Pat.: acquiring, via a client device within a real-world environment, information representative of the real-world environment by directing the client device to capture, via an imaging device included in the client device, an image of a portion of the real-world environment; detect, via a feature detection algorithm, an image feature included in the image; identify, by the client device via a pinhole camera model, a feature ray associated with the imaging device and the detected image feature; and include the feature ray in the information representative of the real-world environment;
Appl.: transmitting the information representative of the real-world environment to a relocalization service;
Pat.: transmitting the information representative of the real-world environment to a relocalization service that relocalizes the client device within the real-world environment based at least on the feature ray;
Appl.: receiving, from the relocalization service in response to the information representative of the real-world environment: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point;
Pat.: receiving, from the relocalization service in response to the information representative of the real-world environment: information associated with an anchor point comprising a mapped position within the real-world environment; and a determined position within the real-world environment of the client device relative to the mapped position of the anchor point;
Appl.: sending a query comprising an identifier associated with the anchor point to an asset management service;
Pat.: sending a query comprising an identifier associated with the anchor point to an asset management service;
Appl.: obtaining, from the asset management service in response to the query, information representative of at least one digital asset; and
Pat.: obtaining, from the asset management service in response to the query, information representative of at least one digital asset;
Appl.: presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Pat.: and presenting the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 7-10, 12, 13, 15, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over EADE et al. (US Pat. Pub. No. 20180285052, “Eade”) in view of Dintenfass (US Pat. Pub. No. 20180158156, “Dintenfass”).
Regarding claim 12, Eade teaches a system (Fig. 4) comprising:
an acquiring module, stored in memory (Fig. 4 integral part of display device 30), that acquires, via a client device within a real-world environment, information representative of the real-world environment ([0041-0042], [0041] “Alternatively, the first display device 30 may programmatically generate an instruction 52 for a virtual place-located anchor 56 at a world-locked virtual location. For example, the first display device 30 may use sensor data to programmatically identify the picture 310 in the dining room 306 as a virtual location to generate a virtual place-located anchor”);
a transmitting module, stored in memory (Fig. 4 integral part of display device 30), that transmits the information representative of the real-world environment to a relocalization service ([0041] “…In response to identifying the picture 310, the first display device 30 may programmatically transmit an instruction 52 to computing device 200 to generate a virtual place-located anchor 56 at a world-locked virtual location corresponding to a corner of the picture 310”; Fig. 4, server computing device 200 is the claimed “relocalization service”);
a receiving module, stored in memory (Fig. 4 integral part of display device 30), that receives, from the relocalization service in response to the information representative of the real-world environment:
information associated with an anchor point comprising a mapped position within the real-world environment ([0040] “……Responsive to receiving the instruction 52, the server computing device 200 generates a virtual place-located anchor 56 on the wall next to the picture 310 and sends the anchor data 54 of the anchor 56 back to the first display device 30, which uses the anchor data 54 to view a hologram 50 of an image of the art piece”); and
a determined position within the real-world environment of the client device relative to the mapped position of the anchor point (Fig. 4 server 200 sends neighboring map data to the client device 30. This map data 58 has key frames, pose graphs etc that includes determined position of the client, See “[0042]…The computing device 200 retrieves, serializes, and sends the neighboring map data 58 to the first display device 30 as serialized neighboring map data 66. …. This neighborhood may include poses 62A-G, keyframes 60A-D, a neighboring anchor 64 in the vicinity of the target anchor 56. The neighboring map data 58 is further illustrated and described in FIG. 7.” “[0047]…… Poses 62A-T, depicted as small arrows in the pose graphs 80A, 80B, are typically unit vectors that point in the direction of a fixed straight-ahead gaze out of the display of display device, as described above, and the pose graphs record the position of the poses in three-dimensional space over time. [0048] Keyframes 60A-Q contain sets of information that can be used to improve the ability of the display device to ascertain its location, and thus help render holograms in stable locations. As discussed above, examples of data included in keyframes 60A-Q include metadata, observations and patches, and/or image feature descriptors. Metadata may include the extrinsic data of the camera, the time when keyframe was taken, gravity data, temperature data, magnetic data, calibration data, global positioning data, etc. Observations and patches may provide information regarding detected feature points in a captured image, such as corners and high contrast color changes that help correct the estimation of the position and orientation of the display device, and accordingly help better align and position the display of a holographic image via display 20 in three-dimensional space”);
However, Eade is silent about a sending module, stored in memory, that sends a query comprising an identifier associated with the anchor point to an asset management service; and an obtaining module, stored in memory, that obtains, from the asset management service in response to the query, information representative of at least one digital asset.
Dintenfass teaches a sending module, stored in memory (integral part of augmented reality user device 200) that sends a query comprising an identifier associated with the anchor point to an asset management service ([0122] “At step 414, the augmented reality user device 200 generates a property token 110. In one embodiment, the augmented reality user device 200 generates a property token 110 comprising the user identifier 108, the location identifier 112, and the property profile 114. In other embodiments, the augmented reality user device 200 generates a property token 110 comprising any other information. At step 416, the augmented reality user device 200 sends the property token 110 to a remote server 102”);
an obtaining module, stored in memory (integral part of augmented reality user device 200) that obtains, from the asset management service in response to the query, information representative of at least one digital asset ([0123] “At step 418, the augmented reality user device 200 receives virtual assessment data 111 from the remote server 102 in response to sending the property token 110 to the remote server 102”).
Dintenfass and Eade are analogous art, as both relate to virtual data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Eade by including a sending module and an obtaining module as part of Eade’s display device, such that the sending module sends a query comprising an identifier associated with the anchor point to an asset management service and the obtaining module obtains, from the asset management service in response to the query, information representative of at least one digital asset, as taught by Dintenfass.
The motivation for the above is to collect related information associated with the anchor point.
Eade modified by Dintenfass teaches a presenting module, stored in memory (integral part of Eade’s display device), that presents the digital asset at a position within an artificial environment relative to the determined position of the client device within the real-world environment and the mapped position of the anchor point (Eade Fig. 5 shows virtual object 50 relative to the mapped position of the anchor point 56 and the determined position of client device 30 or 34; [0023] “…These anchors are world-locked, and the holograms are configured to be displayed in a location that is computed relative to the anchor”; [0048] “Keyframes 60A-Q contain sets of information that can be used to improve the ability of the display device to ascertain its location, and thus help render holograms in stable locations”; Dintenfass also shows a virtual element in front of the user at [0124]); and
at least one physical processor that executes the acquiring module, the transmitting module, the receiving module, the sending module, the obtaining module, and the presenting module (Eade [0037] “The processors of the first display device 30 and the second display device 34 execute a common anchor transfer program 38”).
Claim 1 is directed to a method whose steps are similar in function and scope to the elements of system claim 12; it is rejected based on the same rationale as specified in the rejection of claim 12.
Claim 20 is directed to “A non-transitory computer-readable medium” (Eade [0037] and Dintenfass [0058]) and its elements are similar in function and scope to the elements of system claim 12; it is rejected based on the same rationale as specified in the rejection of claim 12.
Regarding claims 2 and 13, Eade modified by Dintenfass teaches wherein acquiring the information representative of the real-world environment comprises:
capturing, via an imaging device included in the client device, an image of at least a portion of the real-world environment (Eade [0030] “…In real time, images 18A and depth images 21A are respectively captured by cameras 18 and depth camera 21, and processed by a feature matching engine 13 executed by processor 12”); and
detecting an orientation of the imaging device at a time of capturing the image of the portion of the real-world environment (Eade [0039] “…..As the users roam about the room 306, the sensors 18 within the first display device 30 and the second display device 34 capture visual and/or inertial tracking data and thereby track the rotational and translational motion of the display devices through the sensor devices 18, which observe the three-dimensional rotation and translation of the sensor device 18 to be recorded as poses 62A-G and keyframes 60A-G, which are subsequently stored as local map data 32 and local map data 36 in the first display device 30 and the second display device 34, respectively. To help each display device orient itself within the room, the display devices may be configured to observe corners in their environments, such as the four corners of each of the two depicted windows, and may use the position of these detected corners of the windows to correct their estimated position, using the predictive corrective algorithm of FIG. 2”).
Regarding claims 5 and 15, Eade modified by Dintenfass teaches wherein acquiring the information representative of the real-world environment further comprises: detecting at least one image feature included in the image; generating a feature descriptor based on the image feature included in the image and including the feature descriptor as at least part of the information representative of the real-world environment (Eade [0030] “Feature descriptors 11A that describe features such as edges, corners, and other patterns that are detectable through image processing techniques are prestored in a feature library 11 in non-volatile storage device 16. Particular observations of features matching feature descriptors 11A (i.e., feature instances that are detected in a region of interest of an image) may or may not have anchors 56 collocated with them (generally most will not, but some may). The location of each anchors is typically determined by users of the system, but may be programmatically determined as well. In real time, images 18A and depth images 21A are respectively captured by cameras 18 and depth camera 21, and processed by a feature matching engine 13 executed by processor 12 to detect whether features matching the prestored feature descriptors 11A are present in the captured images 18A, 21A by looking for regions in the captured images that match the feature descriptors 11A…. As will be discussed below, sharing of a portion of this aggregated map data with another device, either directly or through intermediary devices such as a server, can enable other devices to more quickly and accurately localize themselves within the physical environment, saving time and processing power for the other devices”).
Regarding claim 7, Eade modified by Dintenfass teaches wherein the information representative of the real-world environment comprises at least one of: an image of at least a portion of the real-world environment captured by an imaging device included in the client device;
an orientation of the imaging device at a time of capturing the image by the imaging device;
an intrinsic parameter of the imaging device; an image feature included in the image;
a feature descriptor based on the image feature included in the image; or
a feature ray with an origin at a point associated with the imaging device and that intersects the image feature (Eade [0030] “Feature descriptors 11A that describe features such as edges, corners, and other patterns that are detectable through image processing techniques are prestored in a feature library 11 in non-volatile storage device 16. Particular observations of features matching feature descriptors 11A (i.e., feature instances that are detected in a region of interest of an image) may or may not have anchors 56 collocated with them (generally most will not, but some may). The location of each anchors is typically determined by users of the system, but may be programmatically determined as well. In real time, images 18A and depth images 21A are respectively captured by cameras 18 and depth camera 21, and processed by a feature matching engine 13 executed by processor 12 to detect whether features matching the prestored feature descriptors 11A are present in the captured images 18A, 21A by looking for regions in the captured images that match the feature descriptors 11A”).
Regarding claim 8, Eade modified by Dintenfass teaches wherein the information representative of the real-world environment further comprises at least one of:
a previously determined position of the client device relative to at least one mapped position within the real-world environment;
a confidence level associated with the previously determined position of the client device;
a global positioning system (GPS) coordinate associated with the client device; or a network identifier associated with the real-world environment (Eade [0032] “The processor 12 may use simultaneous localization and mapping (SLAM) techniques, discussed above, based on sensor suite inputs include the image data 18A, depth image data 21A, odeometry data 19A, and GPS data 25A to generate pose graph 80, feature matching data 13A, and surface reconstruction data 82. The pose graph 80 is a directed graph with nodes that are a series of updated poses 33 detected over time. A pose is typically a unit vector with an origin at a predetermined location (x, y, and z) and extending in a predetermined orientation (pitch, yaw, and roll) in the physical space, and is calculated as described in relation to FIG. 2”).
Regarding claims 9 and 17, Eade modified by Dintenfass teaches tracking a motion of the client device within the real-world environment (Eade [0039] “….As the users roam about the room 306, the sensors 18 within the first display device 30 and the second display device 34 capture visual and/or inertial tracking data and thereby track the rotational and translational motion of the display devices through the sensor devices 18, which observe the three-dimensional rotation and translation of the sensor device 18 to be recorded as poses 62A-G and keyframes 60A-G, which are subsequently stored as local map data 32 and local map data 36 in the first display device 30”); and
updating the determined position within the real-world environment of the client device based on the motion of the client device within the real-world environment; and the physical processor further executes the tracking module (Eade “[0028] The IMU 19 measures the position and orientation of the computing device 10 in six degrees of freedom, and also measures the accelerations and rotational velocities. These values can be recorded as a pose graph to aid in tracking the display device 10”).
Regarding claims 10 and 18, Eade modified by Dintenfass teaches determining, at a time of transmitting the information representative of the real-world environment to the relocalization service, an initial position of the client device within the real-world environment (Eade “[0028] The IMU 19 measures the position and orientation of the computing device 10 in six degrees of freedom, and also measures the accelerations and rotational velocities. These values can be recorded as a pose graph to aid in tracking the display device 10”);
generating an additional anchor point that corresponds to the initial position; and tracking the motion of the client device within the real-world environment relative to the additional anchor point (Eade [0050] “……In the example depicted in FIG. 7, this subset includes anchors 56 and 64 and keyframes 60D-G and poses 62D-I for the first display device 30, and keyframes 60H-J and poses 62J-L for the second display device 34. In this example, the neighborhood of the neighboring map data has been defined to be the anchors, keyframes, and poses that fall within a predetermined distance of the target anchor 56. However, it will be appreciated that the neighborhood can be arbitrarily defined to encompass any shape or size of three-dimensional space surrounding or proximate to the target anchor 56, including neighborhoods that may not necessarily include the target anchor 56 itself”).
Claims 3, 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Eade modified by Dintenfass as applied to claims 2 and 13 above, and further in view of Seeger et al. (US Pat. Pub. No. 20020075389, “Seeger”).
Regarding claims 3 and 14, Eade modified by Dintenfass does not expressly teach adjusting the image by reducing an effect of an intrinsic parameter of the imaging device on the image.
Seeger teaches adjusting an image by reducing an effect of an intrinsic parameter of the imaging device on the image (Seeger [0057] “Referring to FIG. 7, in the second embodiment, the camera further comprises a zoom mechanism 68, for adjusting the focal length (and hence the magnification) of the lens assembly, under the control of the control circuit 40. By providing a zoom facility, it is possible to capture distant portions of an object at a higher resolution, in order to compensate for loss of resolution caused by perspective distortion”; here, Seeger reduces the distortion effect of an intrinsic parameter (focal length)).
Seeger and Eade modified by Dintenfass are analogous art, as both are related to data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Eade as modified by Dintenfass to have included adjusting the image by reducing an effect of an intrinsic parameter of the imaging device on the image, as taught by Seeger.
The motivation for the above is to achieve a less distorted image of the real environment.
Regarding claim 4, Eade modified by Dintenfass and Seeger teaches wherein the intrinsic parameter comprises at least one of: a focal length of the imaging device; a principal point of the imaging device; or a skew coefficient of the imaging device (Seeger [0057] “Referring to FIG. 7, in the second embodiment, the camera further comprises a zoom mechanism 68, for adjusting the focal length (and hence the magnification) of the lens assembly, under the control of the control circuit 40. By providing a zoom facility, it is possible to capture distant portions of an object at a higher resolution, in order to compensate for loss of resolution caused by perspective distortion”).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Eade modified by Dintenfass as applied to claims 2 and 13 above, and further in view of Piemonte et al. (US Pat. Pub. No. 20130321403, “Piemonte”).
Regarding claims 6 and 16, Eade modified by Dintenfass does not expressly teach identifying, based on the image feature and the orientation of the imaging device at the time of capturing the image, a feature ray comprising: an origin at a point associated with the imaging device; and a direction that causes the feature ray to intersect with the image feature; and including the feature ray as at least part of the information representative of the real-world environment.
Piemonte teaches identifying, based on the image feature and the orientation of the imaging device at the time of capturing the image, a feature ray comprising: an origin at a point associated with the imaging device; and a direction that causes the feature ray to intersect with the image feature; and including the feature ray as at least part of the information representative of the real-world environment ([0164] “In this example, it is assumed that a ray is received from the virtual camera or the user's eye into the 3D scene (e.g., into the center of the selected map feature). The pivot point P is the ray intersection of an eye forward vector (i.e., a vector in the direction in which the eye or camera is looking) from the eye into the selected feature in the map. The length of the vector L may be determined based on the known position of the virtual camera in the scene (i.e., its position in 3D space, as represented by its x, y, z coordinates) and the determined position of the selected feature and/or its pivot point (also represented by x, y, z coordinates). Note that when the viewing angle changes, the selected feature may appear to be farther way, but may not actually be farther away. Therefore, the map tool may determine L only once from the initial view of the 3D map”. Refer to paragraph [0164]: P is the image feature, and a ray from the camera intersects the point P. This ray is likewise included with the information representative of the real-world environment of Eade as modified by Dintenfass for the purpose of providing the correct location of both the camera and the feature);
Piemonte and Eade modified by Dintenfass are analogous art, as both are related to data processing.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Eade as modified by Dintenfass to have included identifying, based on the image feature and the orientation of the imaging device at the time of capturing the image, a feature ray comprising: an origin at a point associated with the imaging device; and a direction that causes the feature ray to intersect with the image feature; and including the feature ray as at least part of the information representative of the real-world environment, as taught by Piemonte, by including ray information representative of the real-world environment of Eade as modified by Dintenfass.
The motivation for the above is to provide the correct location of both the camera and the feature.
Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Eade modified by Dintenfass as applied to claims 2 and 13 above, and further in view of Perez et al. (US Pat. Pub. No. 20130293530, “Perez”).
Regarding claims 11 and 19, Eade modified by Dintenfass calculates a position of the client device based on information representative of the real-world environment (Eade “[0026] Computing device 10 also typically includes a six degree of freedom inertial motion unit 19 that includes accelerometers, gyroscopes, and possibly magnometers configured to measure the position of the computing device in six degrees of freedom, namely x, y, z, pitch, roll and yaw.”) but does not expressly teach determining a coarse position of the client device based on a part of the information representative of the real-world environment; identifying a fine position of the client device based on the coarse position and an additional part of the information representative of the real-world environment; and selecting the anchor point based on the fine position of the client device.
Perez teaches determining a coarse position of the client device based on a part of information representative of a real-world environment and identifying a fine position of the client device based on the coarse position and an additional part of the information representative of the real-world environment (refer to Perez [0134], Fig. 11, step 1102: “[0134] FIG. 11 is a flow chart illustrating the steps of FIG. 10A in additional detail. At step 1102, the wearer location may be determined from GPS and other location-based data. For example, the system may make a general, coarse location determination by knowing that the wearer's processing device is connected to the wearer's own Wi-Fi network, and use depth information from a camera 20a and/or the display device 2 to itself to determine the more exact location of the wearer within the environment.” Here, a coarse position of the wearer (i.e., the client device) is determined, and then, based on the coarse position and additional data (depth information), a fine position of the client device is identified.)
Perez and Eade modified by Dintenfass are analogous art, as both are related to data processing of user devices.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Eade as modified by Dintenfass to have included determining a coarse position of the client device based on a part of the information representative of the real-world environment; identifying a fine position of the client device based on the coarse position and an additional part of the information representative of the real-world environment; and selecting the anchor point based on the fine position of the client device, as taught by Perez.
The motivation for the modification is to obtain an accurate position of the client device based on multiple sources.
Eade modified by Dintenfass and Perez teaches selecting the anchor point based on the fine position of the client device. Eade [0047] teaches selecting the anchor point based on a position of the client device (“[0047] Turning to FIG. 7, one possible embodiment of the map data applied in the present disclosure is discussed in more detail. The information for the map data may be generated by at least a sensor device in a plurality of display devices sending sensor data, …. Also contained in the map data are a plurality of virtual place-located anchors, including the target anchor 56 and a neighboring anchor 64 at world-locked virtual locations with known three-dimensional coordinates in the physical environment. These anchors may include visibly conspicuous features in the physical environment, such as the picture 310 and clock 312 illustrated in FIG. 5. Poses 62A-T, depicted as small arrows in the pose graphs 80A, 80B, are typically unit vectors that point in the direction of a fixed straight-ahead gaze out of the display of display device, as described above, and the pose graphs record the position of the poses in three-dimensional space over time.” Anchor points are determined based on the pose (position and orientation) of the display device or client/user.)
Perez is relied upon above to provide the fine position of the client device.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Eade as modified by Dintenfass and Perez to have selected the anchor point based on the fine position of the client device (the fine position of the client device is included from Perez) for the purpose of providing an accurate position of the anchor point, as the fine position of the client device provides a more accurate position calculated from multiple sources.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER whose telephone number is (571)270-3454. The examiner can normally be reached 8 am-4 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at (571)272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAPTARSHI MAZUMDER/Primary Examiner, Art Unit 2612