DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d) to Danish application No. DKPA202170432, filed August 30, 2021. Copies of the certified papers required by 37 CFR 1.55 have been received.
Information Disclosure Statement
The information disclosure statements (IDSs) filed February 15, 2024 and April 24, 2024 have been considered and placed in the application file.
Specification
The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code. Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.
Claim Objections
Claims 7-15 are objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim should refer to other claims in the alternative only and cannot depend from any other multiple dependent claims. See MPEP § 608.01(n). Accordingly, the claims have not been further treated on the merits.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-3 are rejected under 35 U.S.C. 112(b), as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 1 recites the limitations “wherein the toy system further comprises a user input device operatively connected to the processing device and configured to obtain user steering input” and “wherein the toy system further comprises a display operatively connected to the processing device and adapted to render the AR-content according to the position and motion data of the AR-target”. The phrase “the toy system” lacks antecedent basis, as no toy system is previously recited in the claim. Therefore, the scope of these limitations is unclear.
Claim 1 recites the limitation “transforming the virtual world coordinates with respect to the AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object;”. The phrase “the real-world object” lacks antecedent basis. Therefore, the scope of the limitation is unclear.
Claim 1 recites the limitation “wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, thereby representing steering of the movement of the real-world object in respect of the virtual world objects according to the user steering input;”. The phrase “the user steering input” lacks antecedent basis. Therefore, the scope of the limitation is unclear.
Claims 2 and 3 are rejected by virtue of dependency.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable and obvious over US Patent Application Publication US 2021/0192851 A1, (Doptis et al.) (hereinafter Doptis) in view of World Intellectual Property Organization Publication WO 2014/035640 A1, (SOFMAN et al.) (hereinafter Sofman) in view of US Patent Application Publication US 2018/0336729 A1, (Prideaux-Ghee et al.) (hereinafter Prideaux-Ghee), and further in view of Daniel Wagner et al., "ARToolKitPlus for Pose Tracking on Mobile Devices", 2007 (hereinafter Wagner).
Regarding claim 1, Doptis teaches an augmented reality user device adapted to provide augmented reality content associated with a moveable real-world object, (Doptis Abstract, Fig. 1, “[0025] The mobile devices 110 and 112 are used to view a display having image data captured and provided to the mobile devices 110 and 112 by the AR platforms 120 and 122. The mobile devices 110 and 112 may also have one or more IMUs (inertial measurement units), communication capabilities, and processing capabilities capable of generated augmented reality or virtual reality environments.”; “[0046] The AR/VR software 311 is software that generates augmented reality and/or virtual reality, depending on the particular implementation. That AR/VR software 311 may rely upon several inputs. Those inputs may include IMU data (e.g., positional, locational, and orientational data), camera images (e.g., RGB images or associated video captured by a digital camera), infrared or LIDAR images (e.g., spatial images having depth or approximating depth) so that the software 311 can have an understanding of the spatial area in which it is operating, and control input data which may be external control of the location of the associated cameras and IMUs (e.g., on the AR platform 320, as discussed below). The AR/VR software 311 may rely upon information captured by the elements of the AR platform 320 to generate augmented reality objects or a virtual reality environment.”) The Examiner notes, in Doptis “mobile device 310” is the same as “mobile device 110”. wherein the moveable real-world object is adapted to move with respect to a real-world scene according to a characteristic motion pattern, the user device comprising: (Doptis “[0072] The image processing may be simple, such as selecting a frame or a series of frames and analyzing them to identify machine-readable elements. This process may be complex, such as spatial recognition, reliant upon image-only data or image and depth data, to generate a map of the physical area surrounding the AR platform 320 as it moves.”; “[0073] The image processing at 442 may enable the mobile device 310 to construct a relatively detailed understanding of the three-dimensional environment in which the AR platform 320 is operating at any given moment.”; “[0074] …a visual landmark is an aspect of the physical world that is detectable by the mobile device 310 using at least image data captured by the camera(s) 318 on the AR platform 320 and usable by the mobile device 310 to identify the AR platform 320's relative location within a three-dimensional space.”; and “[0075] The visual landmark(s) enable the system to function at least somewhat independently to operate as a game, and enable the system to generate and update its own area map 446. As an AR platform 320 or multiple AR platforms move about in a physical space, they are continuously capturing image data about that space. Over the course of several turns about that space, potentially laps or flights throughout around that space, the AR platforms generate sufficient image data to inform object recognition 313 of the three-dimensional and visual contents of that space.”)
an image capturing device adapted to capture image data of the real-world scene; (Doptis “[0051] …camera(s) 318 may provide a secondary source of information for the object recognition software 313. The mobile device 310 is typically held at a waist height or in front of a user's eyes. So, it may offer a different (e.g., higher or lower) vantage point to enable object recognition to operate more efficiently. The camera(s) 318 may also enable tracking of the AR platform 320, either approximately or accurately, from a vantage point outside of the AR platform 320 itself. The camera(s) 318 may be traditional digital video cameras, but may also include infrared, LIDAR or other depth-sensing cameras.”; “[0053] The AR platform 320 is a mobile and remote augmented reality device incorporating at least one camera. It is an ‘AR’ platform in the sense that it captures video and/or IMU data that is used to generate an augmented reality by the AR/VR software 311 in the mobile device 310. Unlike typical augmented reality systems, the camera(s) 323 used for that augmented reality are remote, operating upon the physically separate AR platform 320, rather than on the mobile device 310.”; and “[0087] A real-world living room 570 is shown, near a kitchen from the perspective of an AR platform 520.”)
a processing unit operatively connected to the image capture device for receiving captured image data, wherein the processing unit is configured to: (Doptis Figs. 2 & 3, “[0032] …the computing device 200 includes a processor 210, memory 220, a communications interface 230, along with storage 240, and an input/output interface 250.”; “[0047] The object recognition software 313 may be a part of the AR/VR software 311 or may be separately implemented. Object recognition software 313 may rely upon camera data or specialized cameras (such as infrared and LIDAR) to perform object recognition to identify objects within images of an environment.”) recognize the moveable real-world object; (Doptis “[0047] The object recognition software 313 may be a part of the AR/VR software 311 or may be separately implemented. Object recognition software 313 may rely upon camera data or specialized cameras (such as infrared and LIDAR) to perform object recognition to identify objects within images of an environment…codes may be used to mark relevant portions of reality or other objects (e.g., other AR platforms 320) in reality.”; “[0048] …the object recognition software 313 may operate using trained neural networks to identify specific objects or object types in substantially real-time. Object recognition software 313 may be able to match the same object each time it passes by or each time it is detected in image data from a camera, or to identify the same three-dimensional shapes or set of shapes each time they pass using specialized cameras. The object recognition software 313 may rely upon both traditional images and specialized images including depth information to perform object recognition.”; and “[0051] The camera(s) 318 may also enable tracking of the AR platform 320, either approximately or accurately, from a vantage point outside of the AR platform 320 itself.”) attribute an augmented reality target to the moveable real-world object; (Doptis Fig. 5, “[0089] …visual landmark 527…may be placed within the world…The visual landmark 527 in this case is a QR code.”; “[0047] In the most basic version of the object recognition software 313, the software operates to identify barcodes, QR codes or other machine-readable elements. Those codes may be used to mark relevant portions of reality or other objects (e.g., other AR platforms 320) in reality.”) track the augmented reality target in captured image data of the real-world scene, so as to obtain tracking data for the augmented reality target, the tracking data comprising actual position data and actual motion data for the AR target, and (Doptis “[0079] …updating the area map 446 and updated display/status may also track progression within an environment. The detected visual landmarks may be used, for example, for tracking positions within a race and race completion or other victory conditions. As an AR platform 320 or group of AR platforms move about a space, relative location and/or progression may be tracked and updated at 440 to determine progression within the competition.”; and “[0080] …optional update to the display/status 440 is to update the display regarding motion data at 448. 
In cases in which motion data is captured at 420, that data may further inform the characters, augmented reality objects or any virtual reality environment.”) generate augmented reality content associated with the moveable real-world object according to the position and motion data of the AR-target; wherein generating AR content associated with the moveable real-world object comprises defining virtual world objects at virtual world coordinates with respect to the position of the AR-target; and (Doptis “[0069] …the AR/VR software 311 may incorporate one or more augmented reality or virtual reality elements in response to the image data (and motion data) onto the display 314 when it is updated at 440. These updates may be to show augmented reality objects at a new or updated location, to re-center or re-render a virtual reality environment, or to show responses to the status or image data.”)
wherein the toy system further comprises a user input device operatively connected to the processing device and configured to obtain user steering input; (Doptis “[0025] The mobile devices 110 and 112 may also incorporate physical controls (e.g. buttons and analog sticks) for receiving control input from the users 115 and 117. Alternatively, on-screen controls (e.g. capacitive or resistive on-display virtual buttons or joysticks) may operate as controls for the AR platforms 120 and 122. The mobile devices 110 and 112 may also have speakers and/or the capability to integrate with headphones to provide sound from associated augmented reality or virtual reality software operating on the mobile devices 110 and 112.” and “[0027] …may be controlled by a remote control…”)
thereby representing steering of the movement of the real-world object in respect of the virtual world objects according to the user steering input; (Doptis “[0025] The mobile devices 110 and 112 may also incorporate physical controls (e.g. buttons and analog sticks) for receiving control input from the users 115 and 117. Alternatively, on-screen controls (e.g. capacitive or resistive on-display virtual buttons or joysticks) may operate as controls for the AR platforms 120 and 122. The mobile devices 110 and 112 may also have speakers and/or the capability to integrate with headphones to provide sound from associated augmented reality or virtual reality software operating on the mobile devices 110 and 112.” and “[0027] …may be controlled by a remote control…”).
and wherein the toy system further comprises a display operatively connected to the processing device and adapted to render the AR-content according to the position and motion data of the AR-target (Doptis Fig. 3, “[0041] The mobile device 310 includes AR/VR software 311, object recognition software 313, a display 314, an IMU (inertial measurement unit) 316, one or more camera(s) 318 and a user interface/control system 319.”; “[0049] The display 314 is a computer display of sufficient size and quality for display of image data captured by the camera(s) 318 (or, more likely, the camera(s) 323)…may incorporate software controls or readouts of relevant information about the mobile device 310 or AR platform 320. The display is capable of real-time or nearly real-time display of image data transmitted to the mobile device 310 from the AR platform 320 and of overlay of augmented reality (or full replacement with virtual reality) of image data as directed by the AR/VR software 311 without significant slowing of frame rates of the display (approximately less than 60 ms delay).”) The Examiner notes, in Doptis “mobile device 310” is the same as “mobile device 110”.
However, Doptis is silent about transforming the virtual world coordinates with respect to the AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object and wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position.
Sofman teaches transforming the virtual world coordinates with respect to the AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object; (Sofman “[0096] …Fig. 4 describes application of virtual parameters to conventional Newtonian physics, one skilled in the art will recognize that any set of rules can be defined to govern the motion of virtual bodies…Virtual forces of numerous types can be introduced arbitrarily and can influence the motion of vehicles 104 differently than would the forces of the real world acting solely according to real-world physics. In this manner, the system of the present invention can simulate and implement behaviors that do not follow real-world laws of physics, but that may follow other rules.”; “[0063] …While the cars are racing on a physical course, the base station maintains a virtual representation of the race state in real time, so that the position, velocity, acceleration, course and other metrics characteristic of moving vehicles are continuously tracked in a re-creation in memory that mirrors the changing state of the physical world.”; “[0065] Thus, in response to the above-described weapons strike on vehicle representation 204L, control algorithms of host device 108 recreate the virtual displacement of vehicle representation 204L in physical environment 201. Thus, physical vehicle 104L is artificially propelled to move in a manner that mimics the displacement of vehicle representation 204L in virtual environment 202. In the example of Fig. 2, physical vehicle 104L, having been struck by a virtual weapon at position 2, is artificially deflected from its current course in physical space.”; “[0067] …the resulting effects in physical environment 201 in turn impact the dynamics or sequence of events in virtual environment 202. The above-described scenario exemplifies the tightly coupled nature of the physical and virtual environments 201, 202 in the system of the present invention. Rather than merely connecting virtual components with physical ones, various embodiments of the present invention are truly symbiotic and bi-directional, such that events and changes occurring in one state (environment) can influence events and changes happening in the other.”; and “[0068] …the system can be configured so that virtual environment 202 dominates physical environment 201, and physical environment 201 simply mirrors the events occurring in virtual environment 202; in at least one embodiment, the opposite configuration may be implemented. Any appropriate priority scheme can be established between the physical and virtual environments 201, 202.”) The Examiner notes this limitation addresses a specific representation of the real-world object’s motion into the corresponding virtual world object. It features a transformation of the virtual world’s coordinates with respect to the AR-target’s position (in the real-world) by an opposite vector of the motion vector. This feature pertains to mapping of parameters from known or pre-determined input mechanisms to parameters of a virtual application. Sofman teaches the motion of the real-world remote-controlled cars can be mapped into a motion in the virtual world according to Newtonian laws of physics and given any arbitrary rules as set by the game designer.
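The Examiner further provides the following minimal sketch for illustration only (hypothetical pseudocode, not taken from Sofman, Doptis, or the claims; the function and variable names are assumptions) of a transform of virtual world coordinates by the opposite of the AR-target's motion vector:

    # Minimal illustrative sketch (hypothetical): every virtual world object is translated
    # by the opposite (negative) of the AR-target's motion vector, so that motion of the
    # real-world object is represented as apparent motion of the virtual world past it.
    def transform_virtual_world(virtual_points, motion_vector):
        # virtual_points: list of (x, y, z) virtual world coordinates
        # motion_vector: (dx, dy, dz) motion of the AR-target between two frames
        dx, dy, dz = motion_vector
        return [(x - dx, y - dy, z - dz) for (x, y, z) in virtual_points]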
However, Doptis and Sofman are silent about wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position.
Prideaux-Ghee teaches wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target (Prideaux-Ghee “[0037] Multiple images may be captured during the relative motion and…a DT may be mapped to (e.g., associated with) the object in each image…the DT may track motion of the object in real-time, thereby allowing for interaction with the object via an image from different perspectives and in real-time.”; “[0043] …to identify the orientation, process 200 identifies one or more features of the object, such as wheel 106 in the loader of FIG. 1.”; “[0051] …instead of or in addition to the user selecting a point of the image by touching a screen or selecting with a pointer, the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated). The user can select a part of the object by manipulating the camera's field of view such that the target points to any point of interest on the object. The process 200 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target.”; and “[0044] That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image. In a state of alignment with the object in the image, the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angles(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image.”)
However, Doptis, Sofman and Prideaux-Ghee are silent about by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position.
Wagner teaches by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, (Wagner Figure 2, “(pg. 2) …ARToolKit uses the marker’s edges for a first, coarse pose detection. In the next step the rotation part of the estimated pose is refined iteratively using matrix fitting. The resulting pose matrix defines a transformation from the camera plane to a local coordinate system in the centre of the marker (see bottom right picture in Figure 2). An application can use these matrices for rendering 3D objects accurately on top of fiducial markers. The final image is displayed on the device’s screen (bottom left picture).”) The Examiner notes the limitation pertains to a specific mapping of the real-world object’s “AR-target” position (in the local space) into its corresponding virtual world coordinates (in the world space). In particular, the virtual world coordinates are rotated in the direction opposite to the user steering input, about a steering axis passing through the AR-target position. This is a mapping of parameters from a given input to parameters of an AR game. Therefore, one skilled in the art would understand that it is common to map the functionality of an AR application to a given input through frameworks such as SLAM, ARKit, ARCore, or ARToolKitPlus, and/or game engines such as Unreal Engine or Unity 3D. Thus, “applying a rotation opposite to the user steering input…” can be defined in the AR game as rotating the virtual world coordinates in the direction opposite to the steering input, about the steering axis passing through the updated AR-target position.
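The Examiner further provides the following minimal sketch for illustration only (hypothetical pseudocode, not taken from Wagner or any other cited reference; the choice of a vertical yaw axis and all names are assumptions) of a rotation opposite to a steering input about an axis passing through the AR-target position:

    # Minimal illustrative sketch (hypothetical): rotate a virtual world point about a
    # vertical (yaw) steering axis passing through the AR-target position, by the negative
    # of the user steering angle, so steering is represented as counter-rotation of the
    # virtual world with respect to the AR-target.
    import math

    def counter_rotate_about_target(point, target_pos, steering_angle_rad):
        angle = -steering_angle_rad                   # rotation opposite to the steering input
        px, py, pz = point
        tx, ty, tz = target_pos
        dx, dz = px - tx, pz - tz                     # shift so the steering axis passes through the origin
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        rx = dx * cos_a - dz * sin_a                  # rotate in the horizontal plane
        rz = dx * sin_a + dz * cos_a
        return (tx + rx, py, tz + rz)                 # shift back to world coordinates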
Doptis, Sofman, Prideaux-Ghee, and Wagner are analogous art as all of them are related to virtual and physical objects.
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by transforming the virtual world coordinates with respect to the AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object, as taught by Sofman; by rotating the virtual world coordinates with respect to the AR-target when generating AR content associated with the moveable real-world object, as taught by Prideaux-Ghee; by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, as taught by Wagner; and to use all of the above within the augmented reality environment of Doptis comprising AR platforms.
The motivation for the above is for creating accurate representations of moving physical objects in the virtual world to further enhance the interactive experience of users.
Regarding claim 2, Doptis teaches wherein the rotation includes one or more of a pitch rotation, a roll rotation, and a yaw rotation (Doptis “[0050] …may instruct an on-screen avatar to move forward, backward, jump, or the like.”; “[0122] …may instruct the AR platform 320 (and its camera(s) 323) to move to the left or right or up or down.”)
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable and obvious over Doptis modified by Sofman, Prideaux-Ghee, and Wagner, as applied to claims 1 and 2 above, and further in view of US Patent Application Publication US 2021/0263525 A1, (Das et al.) (hereinafter Das).
Regarding claim 3, Sofman, Prideaux-Ghee, and Wagner are silent about wherein the processing unit is further configured to: determine a trajectory of the AR-target according to the characteristic motion pattern, based on the actual position and motion data; determine whether a loss of tracking of the augmented reality target has occurred; update position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred, but otherwise update position and motion data of the AR-target with the actual position and motion data according to the tracking data.
Doptis teaches wherein the processing unit is further configured to: based on the actual position and motion data; but otherwise update position and motion data of the AR-target with the actual position and motion data according to the tracking data (Doptis “[0079] …updating the area map 446 and updated display/status may also track progression within an environment. The detected visual landmarks may be used, for example, for tracking positions within a race and race completion or other victory conditions. As an AR platform 320 or group of AR platforms move about a space, relative location and/or progression may be tracked and updated at 440 to determine progression within the competition.”; and “[0080] …optional update to the display/status 440 is to update the display regarding motion data at 448. In cases in which motion data is captured at 420, that data may further inform the characters, augmented reality objects or any virtual reality environment.”)
However, Doptis, Sofman, Prideaux-Ghee, and Wagner are silent about determining a trajectory of the AR-target according to the characteristic motion pattern; determining whether a loss of tracking of the augmented reality target has occurred; and updating position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred.
Das teaches determine a trajectory of the AR-target according to the characteristic motion pattern, (Das “[0009] The track confidence metric may be utilized to determine whether to output the associated track to the prediction and/or planning components of the automated operation system…In turn, the prediction and/or planning components may utilized the track confidence metric to determine a weight (e.g. a up-weight or down-weight) to give the associated track. The classification (e.g., the coarse and/or fine-grained classifications) may be utilized by the prediction and/or planning components to predict the changes and behavior of the objects associated with the tracks and/or plan a trajectory or other actions for the autonomous operation system.”; “[0018] …a planning component of an autonomous vehicle may predict motion/behavior of the detected object and determine a trajectory and/or path for controlling an autonomous vehicle based at least in part on such current and/or previous data.”; and “[0029] Computing device(s) 106 may comprise a memory 108 storing a perception component 110, a tracking component 112, a combined model 114, a prediction component 116, a planning component 118, and/or system controller(s) 120. As illustrated, the perception component 110 may comprise a tracking component 112 and/or a combined model 114…in FIG. 1…the planning component 118 may determine trajectory 128 based at least in part on the perception data, prediction data and/or other information such as, for example, one or more maps, localization information (e.g., where the vehicle 102 is in the environment relative to a map and/or features detected by the perception component 110), and/or the like. The trajectory 128 may comprise instructions for system controller(s) 120 to actuate drive components of the vehicle 102 to effectuate a steering angle and/or steering rate, which may result in a vehicle position, vehicle velocity, and/or vehicle acceleration…the trajectory 128 may comprise a target heading, target steering angle, target steering rate, target position, target velocity, and/or target acceleration for the controller(s) 120 to track. The perception component 110, the prediction component 116, the planning component 118, and/or the tracking component 112 may include one or more machine-learned (ML) models and/or other computer-executable instructions.”)
determine whether a loss of tracking of the augmented reality target has occurred; (Das “[0007] …a tracking component may be configured to track and output a track comprising the current and/or previous position, velocity, acceleration, and/or heading of a detected object (or tracked object) based on pipeline data received from the one or more pipelines. A track confidence metric may provide a measure of whether an associated track is a true-positive (the corresponding tracked object exists in the environment) or a false-positive (the corresponding tracked object was detected and tracked by the pipelines and tracking component but does not exist in the environment).”)
update position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred (Das “[0009] The track confidence metric may be utilized to determine whether to output the associated track to the prediction and/or planning components of the automated operation system…In turn, the prediction and/or planning components may utilized the track confidence metric to determine a weight (e.g. a up-weight or down-weight) to give the associated track. The classification (e.g., the coarse and/or fine-grained classifications) may be utilized by the prediction and/or planning components to predict the changes and behavior of the objects associated with the tracks and/or plan a trajectory or other actions for the autonomous operation system.”; “[0018] …a planning component of an autonomous vehicle may predict motion/behavior of the detected object and determine a trajectory and/or path for controlling an autonomous vehicle based at least in part on such current and/or previous data.”; and “[0007] …a tracking component may be configured to track and output a track comprising the current and/or previous position, velocity, acceleration, and/or heading of a detected object (or tracked object) based on pipeline data received from the one or more pipelines. A track confidence metric may provide a measure of whether an associated track is a true-positive (the corresponding tracked object exists in the environment) or a false-positive (the corresponding tracked object was detected and tracked by the pipelines and tracking component but does not exist in the environment).”)
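The Examiner further provides the following minimal sketch for illustration only (hypothetical pseudocode, not taken from Das or Doptis; the constant-velocity trajectory model, the confidence threshold, and all names are assumptions) of updating the AR-target with calculated data when tracking is lost and with actual tracking data otherwise:

    # Minimal illustrative sketch (hypothetical): if tracking of the AR-target is lost,
    # extrapolate its position along a previously determined trajectory; otherwise use
    # the actual position and motion data from the tracking data.
    def update_ar_target(tracking_data, trajectory, dt):
        # tracking_data: dict with "position", "velocity", "confidence", or None when no detection
        # trajectory: dict with the last known "position" and "velocity" of the AR-target
        # dt: elapsed time since the previous update, in seconds
        tracking_lost = tracking_data is None or tracking_data["confidence"] < 0.5
        if tracking_lost:
            # Calculated update along the trajectory (constant velocity assumed for illustration).
            position = tuple(p + v * dt for p, v in zip(trajectory["position"], trajectory["velocity"]))
            velocity = trajectory["velocity"]
        else:
            # Actual update taken from the tracking data.
            position = tracking_data["position"]
            velocity = tracking_data["velocity"]
        return {"position": position, "velocity": velocity}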
Doptis, Sofman, Prideaux-Ghee, Wagner, and Das are analogous art as all are related to object recognition or detection.
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by determining a trajectory of the AR-target according to the characteristic motion pattern; determining whether a loss of tracking of the augmented reality target has occurred; and updating position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred, as taught by Das, and to use that within the augmented reality environment of Doptis comprising AR platforms.
The motivation for the above is to allow the AR-target to continue moving even if tracking is lost, so that the physical object can continue on its path or trajectory. This improves the overall augmented reality experience, as the user does not need to stop and wait for the AR application to “reload”.
Claims 4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable and obvious over US Patent Application Publication US 2021/0192851 A1, (Doptis et al.) (hereinafter Doptis) in view of US Patent Application Publication US 2021/0263525 A1, (Das et al.) (hereinafter Das).
Regarding claim 4, Doptis teaches a toy system comprising: a moveable real-world object, (Doptis Abstract, Fig. 1, “[0025] The mobile devices 110 and 112 are used to view a display having image data captured and provided to the mobile devices 110 and 112 by the AR platforms 120 and 122. The mobile devices 110 and 112 may also have one or more IMUs (inertial measurement units), communication capabilities, and processing capabilities capable of generated augmented reality or virtual reality environments.”; “[0046] The AR/VR software 311 is software that generates augmented reality and/or virtual reality, depending on the particular implementation. That AR/VR software 311 may rely upon several inputs. Those inputs may include IMU data (e.g., positional, locational, and orientational data), camera images (e.g., RGB images or associated video captured by a digital camera), infrared or LIDAR images (e.g., spatial images having depth or approximating depth) so that the software 311 can have an understanding of the spatial area in which it is operating, and control input data which may be external control of the location of the associated cameras and IMUs (e.g., on the AR platform 320, as discussed below). The AR/VR software 311 may rely upon information captured by the elements of the AR platform 320 to generate augmented reality objects or a virtual reality environment.”) The Examiner notes, in Doptis, the “mobile device 310” is the same as “mobile device 110”. wherein the moveable real-world object is adapted to move with respect to a real-world scene according to a characteristic motion pattern; (Doptis “[0072] The image processing may be simple, such as selecting a frame or a series of frames and analyzing them to identify machine-readable elements. This process may be complex, such as spatial recognition, reliant upon image-only data or image and depth data, to generate a map of the physical area surrounding the AR platform 320 as it moves.”; “[0073] The image processing at 442 may enable the mobile device 310 to construct a relatively detailed understanding of the three-dimensional environment in which the AR platform 320 is operating at any given moment.”; “[0074] …a visual landmark is an aspect of the physical world that is detectable by the mobile device 310 using at least image data captured by the camera(s) 318 on the AR platform 320 and usable by the mobile device 310 to identify the AR platform 320's relative location within a three-dimensional space.”; and “[0075] The visual landmark(s) enable the system to function at least somewhat independently to operate as a game, and enable the system to generate and update its own area map 446. As an AR platform 320 or multiple AR platforms move about in a physical space, they are continuously capturing image data about that space. Over the course of several turns about that space, potentially laps or flights throughout around that space, the AR platforms generate sufficient image data to inform object recognition 313 of the three-dimensional and visual contents of that space.”)
an image capture device adapted to capture image data of the real-world scene; and (Doptis “[0051] …camera(s) 318 may provide a secondary source of information for the object recognition software 313. The mobile device 310 is typically held at a waist height or in front of a user's eyes. So, it may offer a different (e.g., higher or lower) vantage point to enable object recognition to operate more efficiently. The camera(s) 318 may also enable tracking of the AR platform 320, either approximately or accurately, from a vantage point outside of the AR platform 320 itself. The camera(s) 318 may be traditional digital video cameras, but may also include infrared, LIDAR or other depth-sensing cameras.”; “[0053] The AR platform 320 is a mobile and remote augmented reality device incorporating at least one camera. It is an ‘AR’ platform in the sense that it captures video and/or IMU data that is used to generate an augmented reality by the AR/VR software 311 in the mobile device 310. Unlike typical augmented reality systems, the camera(s) 323 used for that augmented reality are remote, operating upon the physically separate AR platform 320, rather than on the mobile device 310.”; and “[0087] A real-world living room 570 is shown, near a kitchen from the perspective of an AR platform 520.”)
a processing unit operatively connected to the image capture device for receiving captured image data, wherein the processing unit is configured to: (Doptis Figs. 2 & 3, “[0032] …the computing device 200 includes a processor 210, memory 220, a communications interface 230, along with storage 240, and an input/output interface 250.”; “[0047] The object recognition software 313 may be a part of the AR/VR software 311 or may be separately implemented. Object recognition software 313 may rely upon camera data or specialized cameras (such as infrared and LIDAR) to perform object recognition to identify objects within images of an environment.”)
recognize the moveable real-world object; (Doptis “[0047] The object recognition software 313 may be a part of the AR/VR software 311 or may be separately implemented. Object recognition software 313 may rely upon camera data or specialized cameras (such as infrared and LIDAR) to perform object recognition to identify objects within images of an environment…codes may be used to mark relevant portions of reality or other objects (e.g., other AR platforms 320) in reality.”; “[0048] …the object recognition software 313 may operate using trained neural networks to identify specific objects or object types in substantially real-time. Object recognition software 313 may be able to match the same object each time it passes by or each time it is detected in image data from a camera, or to identify the same three-dimensional shapes or set of shapes each time they pass using specialized cameras. The object recognition software 313 may rely upon both traditional images and specialized images including depth information to perform object recognition.”; and “[0051] The camera(s) 318 may also enable tracking of the AR platform 320, either approximately or accurately, from a vantage point outside of the AR platform 320 itself.”)
attribute an augmented reality target to the moveable real-world object; (Doptis Fig. 5, “[0089] …visual landmark 527…may be placed within the world…The visual landmark 527 in this case is a QR code.”; “[0047] In the most basic version of the object recognition software 313, the software operates to identify barcodes, QR codes or other machine-readable elements. Those codes may be used to mark relevant portions of reality or other objects (e.g., other AR platforms 320) in reality.”)
track the augmented reality target in captured image data of the real-world scene, so as to obtain tracking data for the augmented reality target, the tracking data comprising actual position data and actual motion data for the AR target; (Doptis “[0079] …updating the area map 446 and updated display/status may also track progression within an environment. The detected visual landmarks may be used, for example, for tracking positions within a race and race completion or other victory conditions. As an AR platform 320 or group of AR platforms move about a space, relative location and/or progression may be tracked and updated at 440 to determine progression within the competition.”; and “[0080] …optional update to the display/status 440 is to update the display regarding motion data at 448. In cases in which motion data is captured at 420, that data may further inform the characters, augmented reality objects or any virtual reality environment.”)
but otherwise update position and motion data of the AR-target with the actual position and motion data according to the tracking data; and generate augmented reality content associated with the moveable real-world object according to the updated position and motion data of the AR-target; (Doptis “[0069] …the AR/VR software 311 may incorporate one or more augmented reality or virtual reality elements in response to the image data (and motion data) onto the display 314 when it is updated at 440. These updates may be to show augmented reality objects at a new or updated location, to re-center or re-render a virtual reality environment, or to show responses to the status or image data.”)
wherein the toy system further comprises a display operatively connected to the processing device and adapted to render the AR-content according to the updated position and motion data of the AR-target (Doptis Fig. 3, “[0041] The mobile device 310 includes AR/VR software 311, object recognition software 313, a display 314, an IMU (inertial measurement unit) 316, one or more camera(s) 318 and a user interface/control system 319.”; “[0049] The display 314 is a computer display of sufficient size and quality for display of image data captured by the camera(s) 318 (or, more likely, the camera(s) 323)…may incorporate software controls or readouts of relevant information about the mobile device 310 or AR platform 320. The display is capable of real-time or nearly real-time display of image data transmitted to the mobile device 310 from the AR platform 320 and of overlay of augmented reality (or full replacement with virtual reality) of image data as directed by the AR/VR software 311 without significant slowing of frame rates of the display (approximately less than 60 ms delay).”) The Examiner notes, Doptis teaches “mobile device 310” the same as “mobile device 110”.
However, Doptis is silent about determining a trajectory of the AR-target according to the characteristic motion pattern, based on the actual position and motion data; determining whether a loss of tracking of the augmented reality target has occurred; and updating position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred.
Das teaches determine a trajectory of the AR-target according to the characteristic motion pattern, based on the actual position and motion data; (Das “[0009] The track confidence metric may be utilized to determine whether to output the associated track to the prediction and/or planning components of the automated operation system…In turn, the prediction and/or planning components may utilized the track confidence metric to determine a weight (e.g. a up-weight or down-weight) to give the associated track. The classification (e.g., the coarse and/or fine-grained classifications) may be utilized by the prediction and/or planning components to predict the changes and behavior of the objects associated with the tracks and/or plan a trajectory or other actions for the autonomous operation system.”; “[0018] …a planning component of an autonomous vehicle may predict motion/behavior of the detected object and determine a trajectory and/or path for controlling an autonomous vehicle based at least in part on such current and/or previous data.”; and “[0029] Computing device(s) 106 may comprise a memory 108 storing a perception component 110, a tracking component 112, a combined model 114, a prediction component 116, a planning component 118, and/or system controller(s) 120. As illustrated, the perception component 110 may comprise a tracking component 112 and/or a combined model 114…in FIG. 1…the planning component 118 may determine trajectory 128 based at least in part on the perception data, prediction data and/or other information such as, for example, one or more maps, localization information (e.g., where the vehicle 102 is in the environment relative to a map and/or features detected by the perception component 110), and/or the like. The trajectory 128 may comprise instructions for system controller(s) 120 to actuate drive components of the vehicle 102 to effectuate a steering angle and/or steering rate, which may result in a vehicle position, vehicle velocity, and/or vehicle acceleration…the trajectory 128 may comprise a target heading, target steering angle, target steering rate, target position, target velocity, and/or target acceleration for the controller(s) 120 to track. The perception component 110, the prediction component 116, the planning component 118, and/or the tracking component 112 may include one or more machine-learned (ML) models and/or other computer-executable instructions.”) determine whether a loss of tracking of the augmented reality target has occurred; (Das “[0007] …a tracking component may be configured to track and output a track comprising the current and/or previous position, velocity, acceleration, and/or heading of a detected object (or tracked object) based on pipeline data received from the one or more pipelines. 
A track confidence metric may provide a measure of whether an associated track is a true-positive (the corresponding tracked object exists in the environment) or a false-positive (the corresponding tracked object was detected and tracked by the pipelines and tracking component but does not exist in the environment).”) update position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred (Das “[0009] The track confidence metric may be utilized to determine whether to output the associated track to the prediction and/or planning components of the automated operation system…In turn, the prediction and/or planning components may utilized the track confidence metric to determine a weight (e.g. a up-weight or down-weight) to give the associated track. The classification (e.g., the coarse and/or fine-grained classifications) may be utilized by the prediction and/or planning components to predict the changes and behavior of the objects associated with the tracks and/or plan a trajectory or other actions for the autonomous operation system.”; “[0018] …a planning component of an autonomous vehicle may predict motion/behavior of the detected object and determine a trajectory and/or path for controlling an autonomous vehicle based at least in part on such current and/or previous data.”; and “[0007] …a tracking component may be configured to track and output a track comprising the current and/or previous position, velocity, acceleration, and/or heading of a detected object (or tracked object) based on pipeline data received from the one or more pipelines. A track confidence metric may provide a measure of whether an associated track is a true-positive (the corresponding tracked object exists in the environment) or a false-positive (the corresponding tracked object was detected and tracked by the pipelines and tracking component but does not exist in the environment).”)
Doptis and Das are analogous art as both are related to object detection or recognition.
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by determining a trajectory of the AR-target according to the characteristic motion pattern; determining whether a loss of tracking of the augmented reality target has occurred; and updating position and motion data of the AR-target with calculated position and motion data according to the trajectory, in case a loss of tracking has occurred, as taught by Das, and to use that within the augmented reality environment of Doptis comprising AR platforms.
The motivation for the above is to create accurate representations of moving physical objects in the virtual world and further enhance the interactive experience of users. This improves the overall augmented reality experience, as the user does not need to stop and wait for the AR application to “reload”.
Claim 16 is a method claim whose steps are similar in scope to the functions performed by system claim 4; therefore, claim 16 is also rejected under the same rationale as specified in the rejection of claim 4.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable and obvious over Doptis modified by Das, as applied to claims 4 and 16 above, and in view of World Intellectual Property Organization Publication WO 2014/035640 A1, (SOFMAN et al.) (hereinafter Sofman).
Regarding claim 5, Das is silent about wherein generating AR content associated with the moveable real-world object comprises defining virtual world objects at virtual world coordinates with respect to the updated position of the AR-target, and transforming the virtual world coordinates with respect to the updated AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object.
Doptis teaches wherein generating AR content associated with the moveable real-world object comprises defining virtual world objects at virtual world coordinates with respect to the updated position of the AR-target (Doptis “[0069] …the AR/VR software 311 may incorporate one or more augmented reality or virtual reality elements in response to the image data (and motion data) onto the display 314 when it is updated at 440. These updates may be to show augmented reality objects at a new or updated location, to re-center or re-render a virtual reality environment, or to show responses to the status or image data.”)
However, Doptis and Das are silent about and transforming the virtual world coordinates with respect to the updated AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object.
Sofman teaches and transforming the virtual world coordinates with respect to the updated AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object (Sofman “[0096] …Fig. 4 describes application of virtual parameters to conventional Newtonian physics, one skilled in the art will recognize that any set of rules can be defined to govern the motion of virtual bodies…Virtual forces of numerous types can be introduced arbitrarily and can influence the motion of vehicles 104 differently than would the forces of the real world acting solely according to real-world physics. In this manner, the system of the present invention can simulate and implement behaviors that do not follow real-world laws of physics, but that may follow other rules.”; “[0063] …While the cars are racing on a physical course, the base station maintains a virtual representation of the race state in real time, so that the position, velocity, acceleration, course and other metrics characteristic of moving vehicles are continuously tracked in a re-creation in memory that mirrors the changing state of the physical world.”; “[0065] Thus, in response to the above-described weapons strike on vehicle representation 204L, control algorithms of host device 108 recreate the virtual displacement of vehicle representation 204L in physical environment 201. Thus, physical vehicle 104L is artificially propelled to move in a manner that mimics the displacement of vehicle representation 204L in virtual environment 202. In the example of Fig. 2, physical vehicle 104L, having been struck by a virtual weapon at position 2, is artificially deflected from its current course in physical space.”; “[0067] …the resulting effects in physical environment 201 in turn impact the dynamics or sequence of events in virtual environment 202. The above-described scenario exemplifies the tightly coupled nature of the physical and virtual environments 201, 202 in the system of the present invention. Rather than merely connecting virtual components with physical ones, various embodiments of the present invention are truly symbiotic and bi-directional, such that events and changes occurring in one state (environment) can influence events and changes happening in the other.”; and “[0068] …the system can be configured so that virtual environment 202 dominates physical environment 201, and physical environment 201 simply mirrors the events occurring in virtual environment 202; in at least one embodiment, the opposite configuration may be implemented. Any appropriate priority scheme can be established between the physical and virtual environments 201, 202.”) The Examiner notes this limitation addresses a specific representation of the real-world object’s motion into the corresponding virtual world object. It features a transformation of the virtual world’s coordinates with respect to the AR-target’s position (in the real-world) by an opposite vector of the motion vector. This feature pertains to mapping of parameters from known or pre-determined input mechanisms to parameters of a virtual application. Sofman teaches the motion of the real-world remote-controlled cars can be mapped into a motion in the virtual world according to Newtonian laws of physics and given any arbitrary rules as set by the game designer.
Doptis, Das, and Sofman are analogous art as all are related to recognition of moveable real-world objects.
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by transforming the virtual world coordinates with respect to the updated AR-target position by an opposite vector of the motion vector, thereby representing a movement of the real-world object in respect of the virtual world objects according to the motion of the real-world object, as taught by Sofman, and to use that within the augmented reality environment of Doptis comprising AR platforms.
The motivation for the above is for creating accurate representations of moving physical objects in the virtual world to further enhance the immersive experience of users.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable and obvious over Doptis modified by Das and Sofman, as applied to claim 5 above, in view of US Patent Application Publication US 2018/0336729 A1 (Prideaux-Ghee et al.) (hereinafter Prideaux-Ghee), and further in view of Daniel Wagner et al., "ARToolKitPlus for Pose Tracking on Mobile Devices", 2007 (hereinafter Wagner).
Regarding claim 6, Das and Sofman are silent about wherein the toy system further comprises a user input device operatively connected to the processing device and configured to obtain user steering input; and wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, thereby representing steering of the movement of the real-world object in respect of the virtual world objects according to the user steering input.
Doptis teaches wherein the toy system further comprises a user input device operatively connected to the processing device and configured to obtain user steering input; (Doptis “[0025] The mobile devices 110 and 112 may also incorporate physical controls (e.g. buttons and analog sticks) for receiving control input from the users 115 and 117. Alternatively, on-screen controls (e.g. capacitive or resistive on-display virtual buttons or joysticks) may operate as controls for the AR platforms 120 and 122. The mobile devices 110 and 112 may also have speakers and/or the capability to integrate with headphones to provide sound from associated augmented reality or virtual reality software operating on the mobile devices 110 and 112.” and “[0027] …may be controlled by a remote control…”) thereby representing steering of the movement of the real-world object in respect of the virtual world objects according to the user steering input; (Doptis “[0025] The mobile devices 110 and 112 may also incorporate physical controls (e.g. buttons and analog sticks) for receiving control input from the users 115 and 117. Alternatively, on-screen controls (e.g. capacitive or resistive on-display virtual buttons or joysticks) may operate as controls for the AR platforms 120 and 122. The mobile devices 110 and 112 may also have speakers and/or the capability to integrate with headphones to provide sound from associated augmented reality or virtual reality software operating on the mobile devices 110 and 112.” and “[0027] …may be controlled by a remote control…”).
However, Doptis, Das, and Sofman are silent about wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position.
Prideaux-Ghee teaches wherein generating AR content associated with the moveable real-world object further comprises rotating the virtual world coordinates with respect to the AR-target (Prideaux-Ghee “[0037] Multiple images may be captured during the relative motion and…a DT may be mapped to (e.g., associated with) the object in each image…the DT may track motion of the object in real-time, thereby allowing for interaction with the object via an image from different perspectives and in real-time.”; “[0043] …to identify the orientation, process 200 identifies one or more features of the object, such as wheel 106 in the loader of FIG. 1.”; “[0051] …instead of or in addition to the user selecting a point of the image by touching a screen or selecting with a pointer, the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated). The user can select a part of the object by manipulating the camera's field of view such that the target points to any point of interest on the object. The process 200 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target.”; and “[0044] That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image. In a state of alignment with the object in the image, the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angles(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image.”).
However, Doptis, Das, Sofman, and Prideaux-Ghee are silent about by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position.
Wagner teaches by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, (Wagner Figure 2, “(pg. 2) …ARToolKit uses the marker’s edges for a first, coarse pose detection. In the next step the rotation part of the estimated pose is refined iteratively using matrix fitting. The resulting pose matrix defines a transformation from the camera plane to a local coordinate system in the centre of the marker (see bottom right picture in Figure 2). An application can use these matrices for rendering 3D objects accurately on top of fiducial markers. The final image is displayed on the device’s screen (bottom left picture).”). The Examiner notes the limitation pertains to a specific mapping of the real-world object’s position (in the local space) into its corresponding virtual world coordinates (in the world space). In particular, a rotation is applied in the direction opposite to the user steering input. This amounts to mapping parameters from a given input to parameters of an AR game. Therefore, one skilled in the art would understand that it is common to map the functionality of an AR application to a given input through techniques and frameworks such as SLAM, ARKit, ARCore, and ARToolKitPlus, and/or game engines such as Unreal Engine or Unity 3D. Thus, “applying a rotation opposite to the user steering input…” can be defined in the AR game as rotating the virtual world coordinates in the direction opposite to the user’s steering input; in other words, for a given steering input “Y”, apply the opposite rotation “X”.
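As an illustration only (not drawn from Wagner, Prideaux-Ghee, or the Applicant’s disclosure), the claimed rotation can be sketched as rotating the virtual world coordinates by the negative of the steering angle about a steering axis passing through the updated AR-target position. The Python sketch below uses hypothetical names (steer_virtual_world, steering_angle_rad) chosen solely for this example and assumes a vertical steering axis.

import numpy as np

def steer_virtual_world(virtual_points, ar_target_position, steering_angle_rad,
                        steering_axis=(0.0, 1.0, 0.0)):
    # Illustrative sketch only: rotate the virtual-world coordinates by the
    # opposite of the user steering input about an axis through the AR-target.
    axis = np.asarray(steering_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    angle = -steering_angle_rad                        # opposite of the steering input
    kx, ky, kz = axis
    # Rodrigues' rotation formula: R = I + sin(a)*K + (1 - cos(a))*K^2.
    K = np.array([[0.0, -kz,  ky],
                  [ kz, 0.0, -kx],
                  [-ky,  kx, 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    c = np.asarray(ar_target_position, dtype=float)
    p = np.asarray(virtual_points, dtype=float) - c    # move the steering axis to the origin
    return p @ R.T + c                                 # rotate, then move back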
Doptis, Das, Sofman, Prideaux-Ghee, and Wagner are analogous art as all are related to object recognition or detection.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by generating AR content associated with the moveable real-world object, comprising rotating the virtual world coordinates with respect to the AR-target as taught by Prideaux-Ghee, and by applying a rotation opposite to the user steering input around a corresponding steering axis passing through the updated AR-target position, as taught by Wagner, and to use that within Doptis' augmented reality environment comprising AR platforms.
The motivation for the above is to create accurate representations of moving physical objects in the virtual world, further enhancing the immersive experience of users.
Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable and obvious over Doptis modified by Das, Sofman, Prideaux-Ghee, and Wagner, as applied to claim 6 above, and further in view of US Patent Application Publication US 2018/0264365 A1 (SOEDERBERG et al.) (hereinafter Soederberg).
Regarding claim 7, Das, Sofman, Prideaux-Ghee, and Wagner are silent about wherein the characteristic motion pattern is provided as a mathematical function and/or as parameters, thereby defining a type and/or a shape of the motion pattern.
Doptis teaches wherein the characteristic motion pattern (Doptis “[0075] As an AR platform 320 or multiple AR platforms move about in a physical space, they are continuously capturing image data about that space. Over the course of several turns about that space, potentially laps or flights throughout around that space, the AR platforms generate sufficient image data to inform object recognition 313 of the three-dimensional and visual contents of that space.”)
However, Doptis, Das, Sofman, Prideaux-Ghee, and Wagner are silent about providing the characteristic motion pattern as a mathematical function and/or as parameters, thereby defining a type and/or a shape of the motion pattern.
Soederberg teaches is provided as a mathematical function and/or as parameters, thereby defining a type and/or a shape of the motion pattern (Soederberg Fig. 2, “[0154] … a scanning trajectory 12 while capturing image/scan data 13 of the physical model 10. The image data is processed by the processor of the mobile device 1 thereby generating a digital three-dimensional representation indicative of the physical model 10 as well as information on pre-determined physical properties, such as colour, shape and/or linear dimensions. The digital three-dimensional representation may be represented and stored in a suitable form in the mobile device, e.g. in a mesh form. The mesh data is then converted into a virtual toy construction model using a suitable algorithm, such as a mesh-to-LXFML code conversion algorithm as further detailed below. The algorithm analyses the mesh and calculates an approximated representation of the mesh as a virtual toy construction model made of virtual toy construction elements that are direct representations of corresponding physical toy construction elements.”)
Doptis, Das, Sofman, Prideaux-Ghee, Wagner, and Soederberg are analogous art as all are related to moveable objects.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by providing the characteristic motion pattern as a mathematical function and/or as parameters, thereby defining a type and/or a shape of the motion pattern, as taught by Soederberg, and to use that within Doptis' augmented reality environment comprising AR platforms.
The motivation for the above is to “save” a specific motion pattern so that it may be utilized by a user.
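As an illustration only (not drawn from Soederberg or the Applicant’s disclosure), a characteristic motion pattern of the kind recited in claim 7 could be provided as a mathematical function of time together with parameters that define its type and shape. The Python sketch below uses hypothetical names (circular_motion_pattern, angular_speed) chosen solely for this example.

import numpy as np

def circular_motion_pattern(t, center=(0.0, 0.0, 0.0), radius=1.0, angular_speed=1.0):
    # Illustrative sketch only: a circular motion pattern expressed as a
    # mathematical function of time t. The parameters (center, radius,
    # angular_speed) define the type (circular) and the shape of the pattern.
    cx, cy, cz = center
    return np.array([cx + radius * np.cos(angular_speed * t),
                     cy,
                     cz + radius * np.sin(angular_speed * t)])

# Hypothetical usage: sample the pattern at five instants in time.
trajectory = [circular_motion_pattern(t, radius=0.5, angular_speed=2.0)
              for t in np.linspace(0.0, np.pi, 5)]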
Regarding claim 8, Doptis teaches wherein the characteristic motion includes one or more of linear motion, oscillatory motion, rotational motion, circular motion, elliptic motion, parabolic motion, projectile motion, and a motion following a pre-determined specified trajectory or path (Doptis “[0117] …the AR/VR software 311 may impose restrictions on the speed of movement or the direction (e.g. only linear movement like a platformer).”; and “[0085] The control inputs may direct the movement of the AR platform 320 so that it turns, moves vertically, beeps a horn, or otherwise changes its movement or location.”)
Regarding claim 9, Doptis teaches wherein parameters defining the characteristic motion include one or more of: a direction of motion, speed, acceleration (Doptis “[0066] Motion data is generally data generated by the IMU 316 that includes data that may be used to estimate speed, estimate motion like turning or movement vertically relative to gravity, and detect events such as flipping over, being hit, running into things, and the like. All of the foregoing may be captured by the IMU 316 for the AR platform 320. As with image data, the motion data is captured using at least one IMU 316 on one AR platform 320, but multiple AR platforms 320 may capture motion data.”)
Regarding claim 10, Doptis teaches wherein the toy system further comprises a propulsion mechanism for propelling the real-world object according to the characteristic motion pattern (Doptis Fig. 3, “[0052] …user interface/control system 319 is used to control the locomotive system 324 of the AR platform 320. The user interface/control system 319 may be or include the IMU 316, as discussed above. However, the user interface/control system 319 may be a more traditional controller or controller-style system including a series of buttons and control sticks or a keyboard and/or mouse. The user interface/control system 319 may also include software-based controllers such as on-screen buttons and movement systems that are shown on the display 314.”; “[0054] The AR platform 320 includes a control interface 321, one or more camera(s) 323, a locomotive system 324 and an IMU 326.”; and “[0055] The control interface 321 is for receiving instructions from the mobile device 310 (e.g. the user interface/control system 319) and passing those along to the locomotive system 324 to cause the AR platform 320 to move as directed by the mobile device 310.”)
Regarding claim 11, Doptis teaches wherein the propulsion mechanism comprises an internal propulsion mechanism adapted to drive the moveable real-world object (Doptis Fig. 3, “[0055] The control interface 321 is for receiving instructions from the mobile device 310 (e.g. the user interface/control system 319) and passing those along to the locomotive system 324 to cause the AR platform 320 to move as directed by the mobile device 310.”)
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable and obvious over Doptis modified by Das, Sofman, Prideaux-Ghee, Wagner, and Soederberg, as applied to claims 7-11 above, and further in view of US Patent Application Publication US 2015/0375128 A1 (Villar et al.) (hereinafter Villar).
Regarding claim 12, Doptis, Das, Sofman, Prideaux-Ghee, Wagner, and Soederberg are silent about wherein the propulsion mechanism comprises an external propulsion mechanism adapted to launch the moveable real-world object.
Villar teaches wherein the propulsion mechanism comprises an external propulsion mechanism adapted to launch the moveable real-world object (Villar “[0035] …the real-world toy 410 is a toy car. When a user moves the car backwards (as indicated by arrow 412), this motion is sensed by the car (and/or by sensors in the computing device) and input to the physics engine. The physics engine models the resultant motion of the corresponding virtual object and provides control signals to the toy car 410 which, when implemented by the controlling mechanism 105, cause the toy car 410 to move forwards (as indicated by arrow 414). In this way, the toy car 410 operates (as far as the user experience is concerned) as if it has a spring mechanism inside it; however, it does not and the effect is generated by the physics engine. By generating the car's response in this way, rather than having a mechanical mechanism that reacts to the user pulling the car back, the resultant motion of the car can be modified in many different ways based on characteristics in the virtual world…”; “[0037]”). The Examiner notes the “external propulsion mechanism” can be as simple as a user’s force. As stated in the Applicant’s specification on page 14, “The mechanism may include a mechanism to simply push in a certain direction with a given force…”.
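As an illustration only (not drawn from Villar or the Applicant’s disclosure), the effect Villar describes, in which a pull-back motion produces a forward launch generated by the physics engine rather than by a physical spring, could be sketched as converting the pull-back distance into a commanded forward speed via a modelled spring. The Python sketch below uses hypothetical names and parameter values (launch_speed_from_pullback, spring_constant, mass) chosen solely for this example.

def launch_speed_from_pullback(pullback_distance, spring_constant=50.0, mass=0.2):
    # Illustrative sketch only: a spring that exists solely in the physics
    # engine. Stored energy 0.5*k*x^2 is converted to kinetic energy
    # 0.5*m*v^2, giving a commanded forward speed v = x * sqrt(k / m).
    return pullback_distance * (spring_constant / mass) ** 0.5

# Hypothetical usage: pulling the toy back 0.10 m commands roughly 1.58 m/s forward.
speed = launch_speed_from_pullback(0.10)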
Doptis, Das, Sofman, Prideaux-Ghee, Wagner, Soederberg, and Villar are analogous art as all are related to moveable objects.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by adding an external propulsion mechanism adapted to launch the moveable real-world object, as taught by Villar, and to use that within Doptis' augmented reality environment comprising AR platforms.
The motivation for the above is to enhance the overall AR experience by allowing the user to direct the motion of an object.
Regarding claim 13, Doptis teaches wherein the propulsion mechanism comprises a trigger device (Doptis “[0027] …locomotive system means a system for moving the AR platform 120 or 122 from one location to another. The locomotive system includes a motor or other motion-generation element and a component that moves in response to the motion-generation element to cause the AR platform 120 or 122 to move from one location to another.”)
Regarding claim 14, Doptis, Das, Sofman, Prideaux-Ghee, Wagner, and Villar are silent about wherein the real-world object is a toy construction model constructed from modular construction elements.
Soederberg teaches wherein the real-world object is a toy construction model constructed from modular construction elements (Soederberg “[0031] The physical toy construction elements may comprise coupling members for detachably interconnecting the toy construction elements with each other. The coupling members may utilize any suitable mechanism for detachably connecting construction elements with other construction elements. In some embodiments, the coupling members comprise one or more protrusions and one or more cavities, each cavity being adapted to receive at least one of the protrusions in a frictional engagement.”; and “[0246] …a physical toy construction model 2903 of a car.")
Doptis, Das, Sofman, Prideaux-Ghee, Wagner, Soederberg, and Villar are analogous art as all are related to moveable objects.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Doptis by using a toy construction model constructed from modular construction elements, as taught by Soederberg, and to use that within Doptis' augmented reality environment comprising AR platforms.
The motivation for the above is to allow a user to construct or build their preferred toy models, thereby improving the overall play experience.
Regarding claim 15, Doptis teaches wherein the real-world object is a toy vehicle model adapted to only travel in a straight line (Doptis “[0117] …the AR/VR software 311 may impose restrictions on the speed of movement or the direction (e.g. only linear movement like a platformer).”)
Reference Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chapter 3: The Geometry of Virtual Worlds by Steven M. LaValle (2020) discloses the geometry of objects in the virtual world, specifically, translation (changing position) and rotation (changing orientation).
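For context only, the translation and rotation operations addressed in that chapter are conventionally written as the rigid-body transform below; this is a standard formulation and not a quotation from LaValle.

\[
\begin{bmatrix} p' \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & t \\ 0^{\top} & 1 \end{bmatrix}
\begin{bmatrix} p \\ 1 \end{bmatrix},
\qquad
p' = R\,p + t,
\]

where \(R\) is a \(3 \times 3\) rotation matrix (changing orientation) and \(t\) is a translation vector (changing position).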
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMELIA VELAZQUEZ VALENCIA whose telephone number is (571)272-7418. The examiner can normally be reached M-F, 7:30AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.V.V/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612
Date: 2/9/2026