Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/29/2025 has been entered.
Response to Amendment
This is in response to applicant’s amendment/response filed on 10/29/2025, which has been entered and made of record. Claims 1, 11, 16, 22, and 24-25 have been amended. Claims 2, 6-10, 21, and 23 have been cancelled. No claims have been added. Claims 1, 3-5, 11-20, 22, and 24-25 are pending in the application.
Response to Arguments
Applicant’s arguments filed on 10/29/2025 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-5, 11-20, 22, 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Nocon et al. (US Pub 2020/0058168 A1) in view of Fonken (US Pub 2019/0042001 A1), Touma et al. (US Pub 2006/0092133 A1) and Veeramani et al. (US Pub 2018/0284982 A1).
As to claim 1, Nocon discloses a method for virtualizing an input device, comprising:
acquiring data of the input device (Nocon, ¶0059, “The peripheral device 400 also includes the IMU 412. The IMU 412 includes a nine degree of freedom sensor which may use information received from an accelerometer 414, a gyroscope 416, and a magnetometer 418. The IMU 412 senses the orientation and movement of the peripheral device 400, to facilitate projection of the virtual blade on the head mounted display 300. The accelerometer 414 measures acceleration forces stemming from movement of the peripheral device 400 in the user's physical environment. The gyroscope 416 measures orientation of peripheral device 400 in the user's physical environment. The magnetometer 418 measures properties of a magnetic field in the user's physical environment. The accelerometer 414, gyroscope 416, and magnetometer 418 are merely examples of sensors that can be included within the IMU 412. In an embodiment, the IMU can include additional suitable sensors, or can include fewer sensors.”),
wherein the data of the input device comprises at least one of configuration information, an input signal, or an image of the input device (Nocon, ¶0005-0007, ¶0045, ¶0073-0074);
determining target information of a three-dimensional model corresponding to the input device in a virtual reality system based on the data of the input device (Nocon, Fig. 5, ¶0063, “FIG. 5 is an illustration of alignment of a virtual object with a physical object in an AR/VR application, according to an embodiment. As discussed above, in an AR or VR application, it is common to display a virtual object on the same screen as one or more physical objects. As illustrated in FIG. 5, a physical hilt 512 can be combined in user's vision with a virtual blade 514 to form a sword or staff 510. Because the blade 514 is virtual, while the hilt 512 is physical, the AR/VR application (e.g., AR/VR application 240) aligns the virtual blade 514 with the physical hilt 512 so that the blade appears to extend straight out in the sword 510.” “IMUs in the head mounted display, peripheral device (e.g., the peripheral device 400), and user device can be used to gather measurements for aligning the virtual blade with the physical hilt.”);
wherein the target information comprises attitude information of the three-dimensional model in a target space (Nocon, ¶0007, “The operation includes receiving a first estimated attitude for an electronic peripheral for an AR or VR application. The electronic peripheral includes a first IMU, the first IMU including a first magnetometer, a first gyroscope, and a first accelerometer. The first estimated attitude is generated using data from the first IMU.” ¶0022, “This can be combined with an estimate of the attitude of the physical peripheral device, from an IMU located in the peripheral, and the estimates can be used to align the virtual object with the physical object in the AR/VR application.”);
displaying the three-dimensional model in a virtual reality scene built in the virtual reality system based on the target information of the three-dimensional model, wherein the attitude of the three-dimensional model in the virtual reality scene is the same as the attitude of the input device in a real space (Nocon, ¶0065, “to determine the attitude of the peripheral device 400 for proper alignment of a virtual object with the peripheral device 400 in the AR/VR display.” ¶0073, “align a virtual object (e.g., a virtual blade) with a physical object (e.g., the peripheral device 400) in the user's display (e.g., on the head mounted display 300). For example, the AR/VR application 240 can compare the attitude estimates with fixed vectors.” “Comparisons with these vectors can be used to align the virtual object with the physical object” ¶0074, “The AR/VR application 240 can then use attitude estimators and prediction algorithms to determine the physical object and display's respective orientations in real time.” ¶0079-0080);
acquiring three-dimensional data detected by an inertial sensor configured on the input device (Nocon, Fig. 6, ¶0064, “FIG. 6 is a flow chart 600 illustrating alignment of a virtual object in an AR/VR system based on IMU data, according to an embodiment. At block 602 the head mounted display 300 receives IMU data from its IMU 312. This includes data from the accelerometer 314, the gyroscope 316, and the magnetometer 318. This data can be used in block 608, discussed below, to determine the direction in which the user is facing and the attitude of the user's head for proper alignment of a virtual object with a physical object in the head mounted display 300.” ¶0065, “At block 604 a peripheral device in the AR/VR system (e.g., the peripheral device 400) receives IMU data from its IMU 412. This includes data from the accelerometer 414, the gyroscope 416, and the magnetometer 418. This data can also be used in block 608, discussed below, to determine the attitude of the peripheral device 400 for proper alignment of a virtual object with the peripheral device 400 in the AR/VR display.”);
updating the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data acquired by the inertial sensor (Nocon, Fig. 8, ¶0073, “the AR/VR application (e.g., the AR/VR application 240 in the user device 200 or the AR/VR application 340 in the head mounted display) uses the estimates from the attitude estimators (e.g., the attitude estimator 442 in the peripheral and the attitude estimator 342 in the head mounted display) to align a virtual object (e.g., a virtual blade) with a physical object (e.g., the peripheral device 400) in the user's display (e.g., on the head mounted display 300). For example, the AR/VR application 240 can compare the attitude estimates with fixed vectors. The AR/VR application 240 can estimate a gravity vector, using a gyroscope (e.g., the gyroscope 316 or 416) and an accelerometer (e.g., the accelerometer 314 or 414). The AR/VR application 240 can also estimate a vector for magnetic north, using a magnetometer (e.g., magnetometer 318 or 418) and the attitude estimations. Comparisons with these vectors can be used to align the virtual object with the physical object.” ¶0074, “the AR/VR application 240 aligns the virtual object (e.g., a virtual blade) with the physical object (e.g., the peripheral device 400) by calculating the yaw, pitch, and roll orientation of the physical object relative to the display (e.g., the head mounted display 300). The AR/VR application 240 then measures and calculates the X, Y, Z position of the physical object relative to the display, and uses the resulting combination to align the virtual object with the physical object. In an embodiment, the AR/VR application 240 calculates the yaw, pitch, and roll orientation of the physical object relative to the display using the nine degree of freedom IMU sensors. The AR/VR application 240 uses the IMU in the physical object (e.g., the IMU 412 in the peripheral 400) to measure values relative to magnetic north and gravity, and uses the IMU sensors in the display (e.g., the IMU 312 in the head mounted display 300) to measure values relative to magnetic north and gravity. The AR/VR application 240 can then use attitude estimators and prediction algorithms to determine the physical object and display's respective orientations in real time.” ¶0094, “an AR/VR application (e.g., the AR/VR application 240 in the user device 200) renders a virtual object in a display (e.g., the head mounted display 300) that also depicts a physical object. For example, the AR/VR application 240 can render the virtual blade 514 with the physical hilt 512 so that the virtual blade 514 appears to extend from the hilt 512. But over time, the display of the virtual object can drift, and so the alignment must be corrected.”); and
mapping the three-dimensional model into the virtual reality scene based on the updated target information, and displaying the three-dimensional model in the virtual reality scene based on the updated target information, and wherein updating the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data detected by the inertial sensor comprises: establishing a corresponding relationship between the inertial sensor on the input device and the target space according to three-dimensional magnetic force data, three-dimensional acceleration data, and three-dimensional gyroscope data collected by the inertial sensor and a spatial position of the inertial sensor relative to the input device, and updating the attitude information of the three-dimensional model in the virtual reality system based on the corresponding relationship (Nocon, abstract, “An electronic peripheral includes a first inertial measurement unit (“IMU”). A head mounted display includes a second IMU. An estimated attitude for the electronic peripheral is generated using data from the first IMU. An estimated attitude for the head mounted display is generated using data from the second IMU. An orientation of a virtual object is determined based on the estimated first and second attitudes, such that the virtual object is aligned with an object in a user's physical environment when the virtual object is displayed to the user.” ¶0019, “an AR/VR video game might include a physical hilt as a controller, to be held by the user, from which a virtual blade could project. As the user rotates the physical hilt, the AR/VR application should change the orientation of the virtual blade, so that the virtual blade appears to the user to extend out from the physical hilt.” ¶0045, “the peripheral device 400 acts as a game controller, simulating a sword or staff to the user.” Fig. 6, ¶0064, “FIG. 6 is a flow chart 600 illustrating alignment of a virtual object in an AR/VR system based on IMU data, according to an embodiment. At block 602 the head mounted display 300 receives IMU data from its IMU 312. This includes data from the accelerometer 314, the gyroscope 316, and the magnetometer 318. This data can be used in block 608, discussed below, to determine the direction in which the user is facing and the attitude of the user's head for proper alignment of a virtual object with a physical object in the head mounted display 300.” ¶0065, “At block 604 a peripheral device in the AR/VR system (e.g., the peripheral device 400) receives IMU data from its IMU 412. This includes data from the accelerometer 414, the gyroscope 416, and the magnetometer 418. This data can also be used in block 608, discussed below, to determine the attitude of the peripheral device 400 for proper alignment of a virtual object with the peripheral device 400 in the AR/VR display.” ¶0067, “At block 608, the attitude estimator module uses IMU data to estimate the attitude of the peripheral device 400 and the head mounted display 300. In an embodiment, the attitude estimator module (e.g., the attitude estimator module 242, 342, or 442) uses nine degree of freedom IMU data (e.g., from the IMU 212, 312, or 412) to estimate the orientation of the device in which the IMU is located, relative to magnetic north and gravity. This can be done by determining yaw, pitch, and roll. The magnetometer in the IMU (e.g., the magnetometer 218, 318, or 418) can be used to determine yaw, and the accelerometer (e.g., the accelerometer 214, 314, or 414) and gyroscope (e.g., the gyroscope 216, 316, or 416) can be used to determine pitch and roll. In an embodiment, the attitude estimator module uses a complementary filter or a fixed gain Kalman filter to filter the magnetometer data and correct for yaw.”).
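Examiner's note, for illustration only: the complementary-filter attitude estimation quoted above (Nocon, ¶0067) can be sketched as follows. This is the examiner's minimal sketch, not code from Nocon; the gain ALPHA, the small-angle gyro integration, and all identifiers are assumptions.

    import math

    ALPHA = 0.98  # complementary-filter gain: trust the gyro short-term

    def attitude_step(state, accel, gyro, mag, dt):
        """One filter update. state = (yaw, pitch, roll) in radians;
        accel and mag are 3-tuples in the sensor frame; gyro is rad/s."""
        yaw, pitch, roll = state

        # Pitch and roll from the accelerometer (direction of gravity).
        ax, ay, az = accel
        acc_pitch = math.atan2(-ax, math.hypot(ay, az))
        acc_roll = math.atan2(ay, az)

        # Tilt-compensated yaw from the magnetometer (magnetic north).
        mx, my, mz = mag
        xh = mx * math.cos(acc_pitch) + mz * math.sin(acc_pitch)
        yh = (mx * math.sin(acc_roll) * math.sin(acc_pitch)
              + my * math.cos(acc_roll)
              - mz * math.sin(acc_roll) * math.cos(acc_pitch))
        mag_yaw = math.atan2(-yh, xh)

        # Complementary filter: integrate the gyro rates (small-angle
        # approximation), then blend in the absolute accelerometer and
        # magnetometer references to cancel gyro drift.
        gx, gy, gz = gyro
        yaw = ALPHA * (yaw + gz * dt) + (1 - ALPHA) * mag_yaw
        pitch = ALPHA * (pitch + gy * dt) + (1 - ALPHA) * acc_pitch
        roll = ALPHA * (roll + gx * dt) + (1 - ALPHA) * acc_roll
        return (yaw, pitch, roll)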
Nocon does not explicitly disclose a spatial position of the inertial sensor relative to the input device. However, such an offset is well known to one of ordinary skill in the art. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
Fonken teaches a spatial position of the inertial sensor relative to the input device (Fonken, Fig. 12, ¶0072, “H.sub.c is the pre-calculated rotation offset between the inertial measurement units (48A, 48B, 49) and the sensor plane (P) of the beacon sensing device (46).”).
Nocon and Fonken are considered to be analogous art because both pertain to 3D input devices. It would have been obvious before the effective filing date of the claimed invention to have modified Nocon with the features of “a spatial position of the inertial sensor relative to the input device” as taught by Fonken. The suggestion/motivation would have been to assist the user in operating software with better precision and accuracy (Fonken, ¶0001).
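Examiner's note, for illustration only: Fonken's pre-calculated rotation offset H.sub.c (¶0072) amounts to composing the IMU's estimated rotation with a fixed mounting offset. A minimal sketch, assuming the offset maps device-frame vectors into the IMU frame; the 90-degree example value and the identifiers are assumptions, not Fonken's code.

    import numpy as np

    # Example mounting offset H_C: the IMU sits rotated 90 degrees about Z
    # relative to the input device's body frame (measured once, offline).
    H_C = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])

    def device_attitude(imu_rotation):
        """Compose the IMU's world-frame rotation matrix with the fixed
        IMU-to-device offset to obtain the input device's attitude."""
        return np.asarray(imu_rotation) @ H_C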
The combination of Nocon and Fonken does not explicitly disclose at least one of a mouse or a keyboard and wherein the three-dimensional model comprises a mouse model corresponding to the mouse or a keyboard model corresponding to the keyboard in the virtual reality system.
Touma teaches at least one of a mouse or a keyboard (Touma, ¶0013, “FIG. 4 shows a block diagram of the 3D Mouse/Controller system and the way it interacts with a 3D application on the computer monitor, through interrelated modules performing the different functions of: Movement Sensing, Sensing data interpretation and conversion to digital data, Wireless Communication of the data to an interface, Graphical rendering of the data in a 3D application.”).
Nocon, Fonken and Touma are considered to be analogous art because all pertain to 3D input devices. It would have been obvious before the effective filing date of the claimed invention to have modified Nocon with the features of “at least one of a mouse or a keyboard” as taught by Touma. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
Veeramani teaches at least one of a mouse or a keyboard and wherein the three-dimensional model comprises a mouse model corresponding to the mouse or a keyboard model corresponding to the keyboard in the virtual reality system (Veeramani, ¶0036, “The image generator may be configured to generate an image of a virtual representation of the physical input device based on the position of the physical input device relative to the user, and to generate an image of a virtual hand based on the determined position of the user's hand relative to the physical input device. In some embodiments, the image generator 21 may be configured to load a three-dimensional model of the virtual input device based on the identified characteristic of the physical input device. In any of the embodiments herein, non-limiting examples of the virtual input device may include any of a virtual keyboard, a virtual mouse, a virtual touchpad, a virtual stylus, and a virtual scroll wheel.” ¶0038, “The input generator 23 may also be configured to generate signals through the IO interface based on information from the gesture tracker 22 and/or the device tracker 24 that correspond to movement of an input device (e.g. such as moving a physical mouse or virtually moving a virtual mouse).” ¶0041, “loading a three-dimensional model of the virtual input device based on the identified characteristic of the physical input device at block 44. For example, the virtual input device may include one of a virtual keyboard, a virtual mouse, a virtual touchpad, a virtual stylus, and a virtual scroll wheel at block 45, among other HIDs.” ¶0047, “render a 3D representation of the actual keyboard/mouse model that a user has setup (e.g. or which has been auto-detected). The virtual keyboard may be drawn in the virtual space close to where the user's finger would be if they were inside the virtual space. Virtual hands/fingers may also be rendered graphically, positioned appropriately on the keys/mouse the user wants to interact with.”).
Nocon, Fonken, Touma and Veeramani are considered to be analogous art because all pertain to 3D input devices. It would have been obvious before the effective filing date of the claimed invention to have modified Nocon with the features of “at least one of a mouse or a keyboard and wherein the three-dimensional model comprises a mouse model corresponding to the mouse or a keyboard model corresponding to the keyboard in the virtual reality system” as taught by Veeramani. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
As to cancelled claim 2, whose analysis is incorporated by reference in claims 12, 17, and 22 below, claim 1 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
wherein a corresponding relationship between the inertial sensor on the input device and a target space is established according to the three-dimensional magnetic force data, the three-dimensional acceleration data, and the three-dimensional gyroscope data collected by the inertial sensor and the spatial position of the inertial sensor relative to the input device to update the attitude information of the three-dimensional model in the virtual reality system (Nocon, ¶0073, “At block 610, the AR/VR application (e.g., the AR/VR application 240 in the user device 200 or the AR/VR application 340 in the head mounted display) uses the estimates from the attitude estimators (e.g., the attitude estimator 442 in the peripheral and the attitude estimator 342 in the head mounted display) to align a virtual object (e.g., a virtual blade) with a physical object (e.g., the peripheral device 400) in the user's display (e.g., on the head mounted display 300). For example, the AR/VR application 240 can compare the attitude estimates with fixed vectors. The AR/VR application 240 can estimate a gravity vector, using a gyroscope (e.g., the gyroscope 316 or 416) and an accelerometer (e.g., the accelerometer 314 or 414). The AR/VR application 240 can also estimate a vector for magnetic north, using a magnetometer (e.g., magnetometer 318 or 418) and the attitude estimations. Comparisons with these vectors can be used to align the virtual object with the physical object.” ¶0074, “the AR/VR application 240 aligns the virtual object (e.g., a virtual blade) with the physical object (e.g., the peripheral device 400) by calculating the yaw, pitch, and roll orientation of the physical object relative to the display (e.g., the head mounted display 300). The AR/VR application 240 then measures and calculates the X, Y, Z position of the physical object relative to the display, and uses the resulting combination to align the virtual object with the physical object. In an embodiment, the AR/VR application 240 calculates the yaw, pitch, and roll orientation of the physical object relative to the display using the nine degree of freedom IMU sensors. The AR/VR application 240 uses the IMU in the physical object (e.g., the IMU 412 in the peripheral 400) to measure values relative to magnetic north and gravity, and uses the IMU sensors in the display (e.g., the IMU 312 in the head mounted display 300) to measure values relative to magnetic north and gravity. The AR/VR application 240 can then use attitude estimators and prediction algorithms to determine the physical object and display's respective orientations in real time.”).
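Examiner's note, for illustration only: one conventional way to realize Nocon's comparison of attitude estimates against fixed gravity and magnetic-north vectors (¶0073) is the TRIAD method, sketched below. This is not Nocon's code; the sample vector values and function names are assumptions.

    import numpy as np

    def triad(g_body, m_body, g_world, m_world):
        """Rotation matrix taking body-frame vectors into the world frame,
        built from a gravity observation (dominant) and a magnetic-north
        observation (secondary)."""
        def frame(a, b):
            a = np.asarray(a, float)
            b = np.asarray(b, float)
            t1 = a / np.linalg.norm(a)
            t2 = np.cross(a, b)
            t2 /= np.linalg.norm(t2)
            t3 = np.cross(t1, t2)
            return np.column_stack((t1, t2, t3))
        return frame(g_world, m_world) @ frame(g_body, m_body).T

    # Accelerometer (gravity) and magnetometer (north) readings in the
    # peripheral's frame, compared against the fixed world-frame vectors:
    R_peripheral = triad([0.1, 0.0, 9.8], [0.3, 0.2, -0.4],
                         [0.0, 0.0, 9.8], [0.2, 0.0, -0.4])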
As to claim 3, claim 1 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses the target information comprises spatial position information, and wherein updating the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data detected by the inertial sensor comprises:
using spatial position information of the three-dimensional model in the virtual reality system as an initial spatial position (Nocon, ¶0005, “determining an orientation of a virtual object for display on the head mounted display based on the estimated first and second attitudes, such that the virtual object is aligned with an object in a user's physical environment when the virtual object is displayed to the user.”);
calculating an amount of relative position movement of the input device in each of three directions of a spatial coordinate system according to three-dimensional magnetic force data, three-dimensional acceleration data, and three-dimensional gyroscope data collected by the inertial sensor (Nocon, Fig. 6, ¶0067, “the attitude estimator module (e.g., the attitude estimator module 242, 342, or 442) uses nine degree of freedom IMU data (e.g., from the IMU 212, 312, or 412) to estimate the orientation of the device in which the IMU is located, relative to magnetic north and gravity. This can be done by determining yaw, pitch, and roll. The magnetometer in the IMU (e.g., the magnetometer 218, 318, or 418) can be used to determine yaw, and the accelerometer (e.g., the accelerometer 214, 314, or 414) and gyroscope (e.g., the gyroscope 216, 316, or 416) can be used to determine pitch and roll. In an embodiment, the attitude estimator module uses a complementary filter or a fixed gain Kalman filter to filter the magnetometer data and correct for yaw.”); and
updating the spatial position information of the three-dimensional model in the virtual reality system according to the initial spatial position and the amount of relative position movement of the input device in each of the three directions of the spatial coordinate system (Nocon, ¶0074, “the AR/VR application 240 aligns the virtual object (e.g., a virtual blade) with the physical object (e.g., the peripheral device 400) by calculating the yaw, pitch, and roll orientation of the physical object relative to the display (e.g., the head mounted display 300). The AR/VR application 240 then measures and calculates the X, Y, Z position of the physical object relative to the display, and uses the resulting combination to align the virtual object with the physical object. In an embodiment, the AR/VR application 240 calculates the yaw, pitch, and roll orientation of the physical object relative to the display using the nine degree of freedom IMU sensors. The AR/VR application 240 uses the IMU in the physical object (e.g., the IMU 412 in the peripheral 400) to measure values relative to magnetic north and gravity, and uses the IMU sensors in the display (e.g., the IMU 312 in the head mounted display 300) to measure values relative to magnetic north and gravity. The AR/VR application 240 can then use attitude estimators and prediction algorithms to determine the physical object and display's respective orientations in real time.”).
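Examiner's note, for illustration only: the claimed position update (an initial spatial position plus the amount of relative movement along each of the three axes) can be illustrated by naive dead reckoning, i.e., double integration of gravity-compensated acceleration. A minimal sketch under those assumptions; a real system would also use the magnetometer and gyroscope data to rotate samples into the world frame and to bound drift.

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, 9.81])  # world-frame gravity (m/s^2)

    def update_position(initial_position, initial_velocity, accel_world, dt):
        """Add the relative movement along each of the three axes, obtained
        by double-integrating gravity-compensated world-frame acceleration
        samples, to the initial spatial position."""
        p = np.asarray(initial_position, float).copy()
        v = np.asarray(initial_velocity, float).copy()
        for a in accel_world:
            linear = np.asarray(a, float) - GRAVITY  # strip gravity
            v += linear * dt   # first integration: velocity
            p += v * dt        # second integration: position
        return p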
As to claim 4, claim 3 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses the method further comprises:
updating the initial spatial position; and correcting a calculation error according to the updated initial spatial position (Nocon, Fig. 7, ¶0080, “Measured IMU data from one or more of the IMUs 412 (in the peripheral device 400), 212 (in the user device 200), and 312 (in the head mounted display 300), can then be used to correct any inaccuracies in the predicted movement of the peripheral device 400 or the head mounted display 300.” ¶0081, “a peripheral device in the AR/VR system (e.g., the peripheral device 400) receives IMU data from its IMU 412. This includes data from the accelerometer 414, the gyroscope 416, and the magnetometer 418. This data can also be used in block 708, discussed below, to determine the attitude of the peripheral device 400 for correction of a predicted position of a virtual object during a movement by the user.” ¶0083, “during a rapid movement of the peripheral 400 or the head mounted display 300 (or both) the attitude estimator module 242 can predict the attitude of each device during the movement and can align the virtual object (e.g., the virtual blade 514) with the physical object (e.g., the hilt 512) during the movement. At block 710, the estimates from block 708 are used to correct this prediction.” ¶0084, “an AR/VR application (e.g., the AR/VR application 240 in the user device 200) renders a virtual object in a display (e.g., the head mounted display 300) that also depicts a physical object. For example, the AR/VR application 240 can render the virtual blade 514 with the physical hilt 512 so that the virtual blade 514 appears to extend from the hilt 512. But over time, the display of the virtual object can drift, and so the alignment must be corrected.” ¶0085, “the AR/VR application 240 uses the new estimated attitudes to re-render the virtual object in the display with the physical object. This can correct any drift or other mis-alignment errors.”).
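Examiner's note, for illustration only: the prediction-correction loop quoted above (Nocon, ¶0080-0085) can be sketched as a fixed-gain blend of the predicted attitude toward the attitude freshly estimated from measured IMU data. The gain K is an assumption, and angle wrap-around is ignored for brevity.

    K = 0.2  # fixed correction gain per frame (assumed value)

    def correct_attitude(predicted, measured):
        """Nudge the predicted (yaw, pitch, roll) toward the attitude
        estimated from measured IMU data, bounding accumulated drift.
        Angle wrap-around near +/-pi is ignored for brevity."""
        return tuple(p + K * (m - p) for p, m in zip(predicted, measured))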
As to claim 5, claim 1 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the inertial sensor is positioned on a surface of the input device or inside the input device (Nocon, Fig. 1, ¶0074, “The AR/VR application 240 uses the IMU in the physical object (e.g., the IMU 412 in the peripheral 400)” Fonken, Fig. 1. The claim would have been obvious because “a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense.”).
As to claim 11, the combination of Nocon, Fonken, Touma and Veeramani discloses
an apparatus for virtualizing an input device, comprising: a first acquisition unit configured to acquire data of the input device, wherein the input device comprises at least one of a mouse or a keyboard, and wherein the data of the input device comprises at least one of configuration information, an input signal, or an image of the input device; a determination unit configured to determine target information of a three-dimensional model corresponding to the input device in a virtual reality system based on the data of the input device, wherein the three-dimensional model comprises a mouse model corresponding to the mouse or a keyboard model corresponding to the keyboard in the virtual reality system, wherein the target information comprises attitude information of the three-dimensional model in a target space, wherein the three-dimensional model is displayed in a virtual reality scene built in the virtual reality system based on the target information of the three-dimensional model, wherein the attitude of the three-dimensional model in the virtual reality scene is the same as the attitude of the input device in a real space; a second acquisition unit configured to acquire three-dimensional data of an inertial sensor configured on the input device; an updating unit configured to update the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data of the inertial sensor; and a mapping unit configured to map the three-dimensional model into a virtual reality scene corresponding to the virtual reality system based on the updated target information, wherein the target information comprises attitude information, and wherein the updating unit is further configured to establish a corresponding relationship between the inertial sensor on the input device and a target space according to three-dimensional magnetic force data, three-dimensional acceleration data and three-dimensional gyroscope data collected by the inertial sensor and a spatial position of the inertial sensor relative to the input device, and update the attitude information of the three-dimensional model in the virtual reality system based on the corresponding relationship. (See claim 1 for detailed analysis.).
As to claim 12, claim 11 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
establish a corresponding relationship between the inertial sensor on the input device and a target space according to the three-dimensional magnetic force data, the three-dimensional acceleration data and the three-dimensional gyroscope data collected by the inertial sensor and the spatial position of the inertial sensor relative to the input device to update the attitude information of the three-dimensional model in the virtual reality system (See claim 2 for detailed analysis.).
As to claim 13, claim 11 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the target information comprises spatial position information, and wherein the updating unit is further configured to: use spatial position information of the three-dimensional model in the virtual reality system as an initial spatial position; calculate an amount of relative position movement of the input device in three directions of a spatial coordinate system according to three-dimensional magnetic force data, three-dimensional acceleration data, and three-dimensional gyroscope data collected by the inertial sensor; and update the spatial position information of the three-dimensional model in the virtual reality system according to the initial spatial position and the amount of relative position movement of the input device in the three directions of the spatial coordinate system (See claim 3 for detailed analysis.).
As to claim 14, claim 13 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the inertial sensor is positioned on a surface of the input device (See claim 5 for detailed analysis.).
As to claim 15, claim 13 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the inertial sensor is positioned inside the input device (See claim 5 for detailed analysis.).
As to claim 16, the combination of Nocon, Fonken, Touma and Veeramani discloses
an electronic device, comprising: a memory; and a processor, wherein the processor is to: acquire data of an input device, wherein the input device comprises at least one of a mouse or a keyboard, and wherein the data of the input device comprises at least one of configuration information, an input signal, or an image of the input device; determine target information of a three-dimensional model corresponding to the input device in a virtual reality system based on the data of the input device, wherein the three-dimensional model comprises a mouse model corresponding to the mouse or a keyboard model corresponding to the keyboard in the virtual reality system, wherein the target information comprises attitude information of the three-dimensional model in a target space; display the three-dimensional model in a virtual reality scene built in the virtual reality system based on the target information of the three-dimensional model, wherein the attitude of the three-dimensional model in the virtual reality scene is the same as the attitude of the input device in a real space; acquire three-dimensional data detected by an inertial sensor configured on the input device; update the target information of the three-dimensional model in the virtual reality system according to the three-dimensional data acquired by the inertial sensor; and map the three-dimensional model into a virtual reality scene corresponding to the virtual reality system based on the updated target information and display the three-dimensional model in the virtual reality scene based on the updated target information, wherein the target information comprises attitude information, and wherein the processor is further configured to establish a corresponding relationship between the inertial sensor on the input device and a target space according to three-dimensional magnetic force data, three-dimensional acceleration data and three-dimensional gyroscope data collected by the inertial sensor and a spatial position of the inertial sensor relative to the input device, and update the attitude information of the three-dimensional model in the virtual reality scene based on the corresponding relationship. (See claim 1 for detailed analysis.).
As to claim 17, claim 16 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
establish a corresponding relationship between the inertial sensor on the input device and a target space according to the three-dimensional magnetic force data, the three-dimensional acceleration data and the three-dimensional gyroscope data collected by the inertial sensor and the spatial position of the inertial sensor relative to the input device to update the attitude information of the three-dimensional model in the virtual reality system (See claim 2 for detailed analysis.).
As to claim 18, claim 16 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the target information comprises spatial position information, and wherein the processor is further to: use spatial position information of the three-dimensional model in the virtual reality system as an initial spatial position; calculate an amount of relative position movement of the input device in three directions of a spatial coordinate system according to three-dimensional magnetic force data, three-dimensional acceleration data, and three-dimensional gyroscope data collected by the inertial sensor; and update the spatial position information of the three-dimensional model in the virtual reality system according to the initial spatial position and the amount of relative position movement of the input device in the three directions of the spatial coordinate system (See claim 3 for detailed analysis.).
As to claim 19, claim 16 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the inertial sensor is positioned on a surface of the input device (See claim 5 for detailed analysis.).
As to claim 20, claim 16 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the inertial sensor is positioned inside the input device (See claim 5 for detailed analysis.).
As to claim 22, claim 2 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the virtual reality scene is displayed in the virtual system and is a scene in the target space (Nocon, ¶0004, “Certain devices may be configured to insert virtual objects into the captured images before the images are displayed. Some devices may allow users to manipulate the virtual objects being displayed by, for example, moving the device or manipulating a joystick or buttons. This is commonly referred to as an augmented reality (AR) or virtual reality (VR) video game.” ¶0018, “Embodiments herein describe aligning virtual objects in an AR/VR application. As described above, AR/VR applications can involve the insertion of virtual objects into images of a physical, real world scene. The combination of virtual and real world objects can then be displayed to a user. And the user can manipulate the virtual objects through the use of a peripheral device.” Fig. 5, ¶0063, “a physical hilt 512 can be combined in user's vision with a virtual blade 514 to form a sword or staff 510. Because the blade 514 is virtual, while the hilt 512 is physical, the AR/VR application (e.g., AR/VR application 240) aligns the virtual blade 514 with the physical hilt 512 so that the blade appears to extend straight out in the sword 510.” ¶0074, “the AR/VR application 240 aligns the virtual object (e.g., a virtual blade) with the physical object (e.g., the peripheral device 400) by calculating the yaw, pitch, and roll orientation of the physical object relative to the display (e.g., the head mounted display 300). The AR/VR application 240 then measures and calculates the X, Y, Z position of the physical object relative to the display, and uses the resulting combination to align the virtual object with the physical object. In an embodiment, the AR/VR application 240 calculates the yaw, pitch, and roll orientation of the physical object relative to the display using the nine degree of freedom IMU sensors. The AR/VR application 240 uses the IMU in the physical object (e.g., the IMU 412 in the peripheral 400) to measure values relative to magnetic north and gravity, and uses the IMU sensors in the display (e.g., the IMU 312 in the head mounted display 300) to measure values relative to magnetic north and gravity. The AR/VR application 240 can then use attitude estimators and prediction algorithms to determine the physical object and display's respective orientations in real time.”).
As to claim 24, claim 12 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the virtual reality scene is displayed in the virtual system and is a scene in the target space (See claim 22 for detailed analysis.).
As to claim 25, claim 17 is incorporated and the combination of Nocon, Fonken, Touma and Veeramani discloses
the virtual reality scene is displayed in the virtual system and is a scene in the target space (See claim 22 for detailed analysis.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Simmons (US Pub 2019/0212825 A1) teaches a haptic feedback device used as a chisel tool in a virtual clay sculpting application.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm PST, with mid-day flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/Primary Examiner, Art Unit 2613