DETAILED ACTION
Allowable Subject Matter
Claims 12 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 13, 15 and 19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2 and 10-11 of prior U.S. Patent No. 12,265,665. Although the claims at issue are not identical, they are not patentably distinct from each other because they are substantially the same in scope, as shown in the comparison below.
Prior U.S. Patent No. 12,265,665
Instant application
Claim 1, a method comprising:
at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices via a communication interface: determining a location for virtual content; detecting a user interaction with the virtual content;
in response to detecting the user interaction with the virtual content, determining a position of a hand gesture during the user interaction with the virtual content;
in accordance with a determination that the position of the hand gesture is within a threshold distance relative to the location of the virtual content, generating corrected hand tracking data associated with the user interaction with the virtual content and corresponding to a direct interaction with the virtual content; and in accordance with a determination that the position of the hand gesture is outside of the threshold distance relative to the location of the virtual content, generating uncorrected hand tracking data associated with the user interaction with the virtual content and corresponding to an indirect interaction with the virtual content.
Claim 2, the method of claim 1, wherein generating corrected hand tracking data associated with the user interaction with the virtual content includes: obtaining uncorrected hand tracking data associated with a current time period and uncorrected hand tracking data associated with a previous time period; obtaining corrected hand tracking data associated with the current time period; generating differential hand tracking data by performing a difference between the corrected hand tracking data associated with the current time period and the uncorrected hand tracking data associated with the previous time period; generating temporally smoothed differential hand tracking data by performing temporal smoothing on the differential hand tracking data based on a depth map of a physical environment for the current time period; and generating output hand tracking data by performing a difference between the temporally smoothed differential hand tracking data and the uncorrected hand tracking data associated with the current time period, wherein the output hand tracking data corresponds to the corrected hand tracking data associated with the user interaction with the virtual content.
Claim 1, a method comprising:
at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices via a communication interface: detecting a user interaction with virtual content; in accordance with a determination that a position of a hand is within a threshold distance of the virtual content, rendering the user interaction with the virtual content by correcting the position of the hand and rendering the user interaction with the virtual content based on the corrected position of the hand; and
in accordance with a determination that the position of the hand is outside of the threshold distance of the virtual content, rendering the user interaction with the virtual content without correcting the position of the hand.
Claim 13, the method of claim 1, wherein correcting the position of the hand includes performing temporal smoothing.
Claim 10, A computing system comprising:
one or more processors;
a non-transitory memory;
an interface for communicating with a display device and one or more input devices; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the computing system to:
determine a location for virtual content; detect a user interaction with the virtual content;
in response to detecting the user interaction with the virtual content, determine a position of a hand gesture during the user interaction with the virtual content;
in accordance with a determination that the position of the hand gesture is within a threshold distance relative to the location of the virtual content, generate corrected hand tracking data associated with the user interaction with the virtual content and corresponding to a direct interaction with the virtual content; and
in accordance with a determination that the position of the hand gesture is outside of the threshold distance relative to the location of the virtual content, generate uncorrected hand tracking data associated with the user interaction with the virtual content and corresponding to an indirect interaction with the virtual content.
Claim 11, the computing system of claim 10, wherein generating corrected hand tracking data associated with the user interaction with the virtual content includes:
obtaining uncorrected hand tracking data associated with a current time period and uncorrected hand tracking data associated with a previous time period;
obtaining corrected hand tracking data associated with the current time period;
generating differential hand tracking data by performing a difference between the corrected hand tracking data associated with the current time period and the uncorrected hand tracking data associated with the previous time period;
generating temporally smoothed differential hand tracking data by performing temporal smoothing on the differential hand tracking data based on a depth map of a physical environment for the current time period; and
generating output hand tracking data by performing a difference between the temporally smoothed differential hand tracking data and the uncorrected hand tracking data associated with the current time period, wherein the output hand tracking data corresponds to the corrected hand tracking data associated with the user interaction with the virtual content.
Claim 15, A computing system comprising:
one or more processors;
a non-transitory memory;
an interface for communicating with a display device and one or more input devices; and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the computing system to:
detect a user interaction with virtual content;
in accordance with a determination that a position of a hand is within a threshold distance of the virtual content, render the user interaction with the virtual content by correcting the position of the hand and rendering the user interaction with the virtual content based on the corrected position of the hand; and
in accordance with a determination that the position of the hand is outside of the threshold distance of the virtual content, render the user interaction with the virtual content without correcting the position of the hand.
Claim 19, The computing system of claim 15, wherein correcting the position of the hand includes performing temporal smoothing.
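For illustration only (not part of the rejection), the correction pipeline recited in claim 2 of the '665 patent and claim 11 of the instant application may be sketched in Python. All names and array shapes are assumptions, and the scalar `depth_weight` stands in for the claimed depth-map-based temporal smoothing; this is a minimal sketch, not either party's implementation:

```python
import numpy as np

def correct_hand_tracking(corrected_current, uncorrected_current,
                          uncorrected_previous, depth_weight=1.0):
    """Sketch of the claimed differential correction pipeline."""
    # Differential data: difference between the corrected data for the
    # current time period and the uncorrected data for the previous period.
    differential = corrected_current - uncorrected_previous
    # Temporal smoothing of the differential data; the claims base this on a
    # depth map of the physical environment, modeled here (an assumption)
    # as a single scalar weight.
    smoothed = depth_weight * differential
    # Output data: difference between the smoothed differential data and the
    # uncorrected data for the current time period.
    return smoothed - uncorrected_current
```

With `depth_weight=1.0` and zeroed uncorrected data, the output reduces to the corrected input, which matches the claim's statement that the output corresponds to the corrected hand tracking data.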
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-11, 14-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ravasz et al. (US Pub: 2021/0090337 A1) in view of Wada et al. (US Pub: 2021/0142049 A1).
As to claim 1, Ravasz teaches a method (i.e., the interactive augmented reality method of the Ravasz figures 1-13 embodiment) (see Fig. 1-13, [0100-0104]) comprising:
at a computing system including non-transitory memory and one or more processors (i.e., as seen in the figures 1 and 3 embodiment, the computer system 100 uses memory 10 and processor 110 to implement the augmented reality system) (see Fig. 1, 3, [0065-0067]), wherein the computing system is communicatively coupled to a display device and one or more input devices via a communication interface (i.e., as seen in the figures 1-3 embodiment of Ravasz, the computer network model in which a local computer device, along with the network resource, is able to implement display and hand-gesture input systems) (see Fig. 1-3, [0052-0067]):
detecting a user interaction with virtual content (i.e., as seen in the figure 13 embodiment, the computer system detects the user's hand movement as well as the finger moving gesture toward the object 1308 for virtual content interaction) (see Fig. 13, [0104]);
in accordance with a determination that a position of a hand is directed toward the virtual content, rendering the user interaction with the virtual content by correcting the position of the hand and rendering the user interaction with the virtual content based on the corrected position of the hand (i.e., as seen in figures 12-13, when the user's finger and hand movement directed toward the virtual object 1308 meets a threshold value, a snap function captures the virtual object that is closest to the direction of movement) (see Fig. 12-13, [0101-0104]); and
in accordance with a determination that the position of the hand is outside of the threshold indication of the virtual content, rendering the user interaction with the virtual content without correcting the position of the hand (i.e., the system and method of Ravasz explicitly display the user's hand movement as it travels in the augmented reality system, wherein when the user's hand is outside of the snapping distance no correction is needed since the snap function is not enabled) (see Fig. 12-13, [0101-0104]).
However, Ravasz does not explicitly teach that the user's hand is within a threshold distance of the virtual content, but instead uses a general direction-indicating means of cone or cylinder coverage of the virtual content (i.e., Ravasz is silent as to a threshold distance between the user's hand and the virtual content, but rather indicates the concept indirectly with the snapping functions) (see Fig. 10-13).
Wada teaches detecting that the user's hand is within a threshold distance of the virtual content (i.e., as seen in the figures 1-3 embodiment of Wada, the user's hand interaction with the key object of figure 3 shows the holding concept; the system visually recognizes whether the hand is within a threshold distance of the key object and corrects the displayed image according to whether the user's hand is separate from the key object or holding it, which shows a direct distance-based determination algorithm) (see Fig. 1-3, [0055-0056]).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the current application to have used the distance-based algorithm of Wada for detecting a user body part with respect to an object in the overall system of Ravasz, to further improve user motion detection in complex user operations where different user motions are detected, in order to improve system detection accuracy (see Wada, [0006]).
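For illustration only (not part of the rejection), the direct/indirect branching recited in claims 1 and 10 of the '665 patent may be sketched in Python. The threshold value, the function names, and the identity pass-through standing in for the claimed correction are all assumptions, not taken from the claims, Ravasz, or Wada:

```python
import numpy as np

DIRECT_THRESHOLD_M = 0.1  # assumed threshold distance, in meters

def identity_correction(tracking_data):
    # Placeholder for the claimed correction (e.g., snapping or temporal
    # smoothing); an identity pass-through in this sketch.
    return tracking_data

def classify_interaction(hand_position, content_location, tracking_data):
    """Return (interaction mode, hand tracking data) per the claimed branching."""
    # Direct interaction when the hand gesture is within the threshold
    # distance of the virtual content; indirect interaction otherwise.
    distance = float(np.linalg.norm(np.asarray(hand_position)
                                    - np.asarray(content_location)))
    if distance <= DIRECT_THRESHOLD_M:
        # Within the threshold: generate corrected hand tracking data.
        return "direct", identity_correction(tracking_data)
    # Outside the threshold: generate uncorrected hand tracking data.
    return "indirect", tracking_data
```

A hand position 5 cm from the content location would classify as a direct interaction under the assumed 10 cm threshold, while a hand 1 m away would classify as indirect.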
As to claim 15, Ravasz teaches a computing system (i.e., the interactive augmented reality computing system of the Ravasz figures 1-13 embodiment) (see Fig. 1-13, [0100-0104]) comprising:
one or more processors (i.e. element 412) (see Fig. 4, [0070]);
a non-transitory memory (i.e. 414, 418) (see Fig. 4, [0070]);
an interface for communicating with a display device and one or more input devices (i.e. projection display 434 and the interface 432 of figure 4) (see Fig. 4, [0070]); and
one or more programs stored in the non-transitory memory, which, when executed by the one or more processors (i.e. the interactive augmented reality computing system Ravasz figure 1-13 embodiment uses electronic components) (see Fig. 1-13, [0070]), cause the computing system to:
detect a user interaction with virtual content (i.e., as seen in the figure 13 embodiment, the computer system detects the user's hand movement as well as the finger moving gesture toward the object 1308) (see Fig. 13, [0104]);
in accordance with a determination that a position of a hand is within a threshold of the virtual content, render the user interaction with the virtual content by correcting the position of the hand and rendering the user interaction with the virtual content based on the corrected position of the hand (i.e., as seen in figures 12-13, when the user's finger and hand movement directed toward the virtual object 1308 meets a threshold value, a snap function captures the virtual object that is closest to the direction of movement) (see Fig. 12-13, [0101-0104]); and
in accordance with a determination that the position of the hand is outside of the threshold of the virtual content, render the user interaction with the virtual content without correcting the position of the hand (i.e., the system and method of Ravasz explicitly display the user's hand movement as it travels in the augmented reality system, wherein when the user's hand is outside of the snapping distance no correction is needed since the snap function is not enabled) (see Fig. 12-13, [0101-0104]).
However, Ravasz does not explicitly teach that the user's hand is within a threshold distance of the virtual content, but instead uses a general direction-indicating means of cone or cylinder coverage of the virtual content (i.e., Ravasz is silent as to a threshold distance between the user's hand and the virtual content, but rather indicates the concept indirectly with the snapping functions) (see Fig. 10-13).
Wada teaches detecting that the user's hand is within a threshold distance of the virtual content (i.e., as seen in the figures 1-3 embodiment of Wada, the user's hand interaction with the key object of figure 3 shows the holding concept; the system visually recognizes whether the hand is within a threshold distance of the key object and corrects the displayed image according to whether the user's hand is separate from the key object or holding it, which shows a direct distance-based determination algorithm) (see Fig. 1-3, [0055-0056]).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the current application to have used the distance-based algorithm of Wada for detecting a user body part with respect to an object in the overall system of Ravasz, to further improve user motion detection in complex user operations where different user motions are detected, in order to improve system detection accuracy (see Wada, [0006]).
As to claim 20, Ravasz teaches a non-transitory memory (i.e., 414, 418) (see Fig. 4, [0070]) storing one or more programs, which, when executed by one or more processors (i.e., element 412) (see Fig. 4, [0070]) of a computing system with an interface for communicating with a display device (i.e., projection display 434 of figure 4) (see Fig. 4, [0070]) and one or more input devices (i.e., the object selection engine 436 and interface 432) (see Fig. 4, [0070]), cause the computing system (i.e., the interactive augmented reality computing system of the Ravasz figures 1-13 embodiment) (see Fig. 1-13, [0070]) to:
detect a user interaction with virtual content (i.e., as seen in the figure 13 embodiment, the computer system detects the user's hand movement as well as the finger moving gesture toward the object 1308) (see Fig. 13, [0104]);
in accordance with a determination that a position of a hand is within a threshold of the virtual content, render the user interaction with the virtual content by correcting the position of the hand and rendering the user interaction with the virtual content based on the corrected position of the hand (i.e., as seen in figures 12-13, when the user's finger and hand movement directed toward the virtual object 1308 meets a threshold value, a snap function captures the virtual object that is closest to the direction of movement) (see Fig. 12-13, [0101-0104]); and
in accordance with a determination that the position of the hand is outside of the threshold of the virtual content, render the user interaction with the virtual content without correcting the position of the hand (i.e., the system and method of Ravasz explicitly display the user's hand movement as it travels in the augmented reality system, wherein when the user's hand is outside of the snapping distance no correction is needed since the snap function is not enabled) (see Fig. 12-13, [0101-0104]).
However, Ravasz does not explicitly teach that the user's hand is within a threshold distance of the virtual content, but instead uses a general direction-indicating means of cone or cylinder coverage of the virtual content (i.e., Ravasz is silent as to a threshold distance between the user's hand and the virtual content, but rather indicates the concept indirectly with the snapping functions) (see Fig. 10-13).
Wada teaches detecting that the user's hand is within a threshold distance of the virtual content (i.e., as seen in the figures 1-3 embodiment of Wada, the user's hand interaction with the key object of figure 3 shows the holding concept; the system visually recognizes whether the hand is within a threshold distance of the key object and corrects the displayed image according to whether the user's hand is separate from the key object or holding it, which shows a direct distance-based determination algorithm) (see Fig. 1-3, [0055-0056]).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the current application to have used the distance-based algorithm of Wada for detecting a user body part with respect to an object in the overall system of Ravasz, to further improve user motion detection in complex user operations where different user motions are detected, in order to improve system detection accuracy (see Wada, [0006]).
As to claim 2, Ravasz teaches the method of claim 1, wherein detecting the user interaction with the virtual content includes detecting an eye tracking input (i.e., as seen in figures 2A and 2B, the HMD system of Ravasz includes an additional eye tracking system) (see Fig. 2, [0062-0063]).
As to claim 3, Ravasz teaches the method of claim 2, wherein, in accordance with a determination that the position of the hand is outside of the threshold distance of the virtual content, rendering the user interaction with the virtual content is based on the eye tracking input (i.e., as seen in figure 7, the detection system of Ravasz uses the projection 706 with a dominant eye origin point 702 and a fingertip control point 704, which means the eye tracking input is used for input purposes) (see Fig. 7, [0086]).
As to claim 4, Ravasz teaches the method of claim 3, wherein, in accordance with a determination that the position of the hand is outside the threshold distance of the virtual content, rendering the user interaction with the virtual content is further based on changes in the position of the hand (i.e., when the snap function is not triggered because the user's hand is outside the threshold distance, as taught in Ravasz in view of Wada, the virtual content is still rendered based on changes in the user's fingertip position in the form of the ray projection input) (see [0101]).
As to claim 8, Ravasz teaches the method of claim 1, wherein detecting the user interaction with the virtual content includes detecting a gestural input including changes in the position of the hand (i.e., Ravasz teaches detecting the user's fingertip movement, which is a gestural input that includes changes in the position of the hand; as seen in figure 7, the detection system of Ravasz uses the projection 706 with a dominant eye origin point 702 and a fingertip control point 704) (see Fig. 7, [0086]).
As to claim 9, Ravasz teaches the method of claim 8, wherein, in accordance with a determination that the position of the hand is within the threshold distance of the virtual content, rendering the user interaction with the virtual content is based on changes in the corrected position of the hand (i.e., as seen in figures 6-7, the system of Ravasz tracks the user's hand as well as the fingertip, which, when displayed on the HMD device, would at least include the user's finger joints) (see Fig. 6-7, [0086-0087]).
As to claim 10, Ravasz teaches the method of claim 9, wherein, in accordance with a determination that the position of the hand is outside the threshold distance of the virtual content, rendering the user interaction with the virtual content is based on the changes in the position of the hand (i.e., when the snap function is not triggered because the user's hand is outside the threshold distance, as taught in Ravasz in view of Wada, the virtual content is still rendered based on changes in the user's fingertip position in the form of the ray projection input) (see [0101]).
As to claim 11, Ravasz teaches the method of claim 1, wherein the position of the hand includes one or more locations of one or more joints of the hand (i.e., as seen in figures 6-7, the system of Ravasz tracks the user's hand as well as the fingertip, which, when displayed on the HMD device, would at least include the user's finger joints) (see Fig. 6-7, [0086-0087]).
As to claim 14, Ravasz teaches the method of claim 1, further comprising displaying, via the display device, the rendered user interaction with the virtual content (i.e., Ravasz teaches the HMD device, which displays the image in the figures 2A and 2B embodiments) (see Fig. 2A, 2B, [0052]).
As to claim 16, Ravasz teaches the computing system of claim 15, wherein the computing system detects the user interaction with the virtual content by detecting an eye tracking input and renders the user interaction with the virtual content based on the eye tracking input and changes in the position of the hand (i.e., as seen in figures 2A and 2B, the HMD system of Ravasz includes an additional eye tracking system) (see Fig. 2, [0062-0063]).
Claims 5-7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ravasz et al. in view of Wada et al. as applied to claims 1 and 15 above, and further in view of Karmon et al. (US Pub: 2018/0307319 A1).
As to claim 5, Ravasz and Wada teach the method of claim 1, but do not teach wherein detecting the user interaction with the virtual content includes detecting a voice input (i.e., Ravasz teaches using user gestures but is silent with respect to voice input).
Karmon teaches wherein detecting the user interaction with the virtual content includes detecting a voice input (i.e., Karmon teaches in the background of the invention that the user interface for a computer can include a voice interface) (see [0003]).
Therefore, it would have been obvious to one of ordinary skill in the art at the effective filing date of the current application to have used the Karmon voice interface in the Ravasz figure 4 interface system, in order to further expand the available interface modalities (see Karmon [0003] and Ravasz Fig. 4).
As to claim 6, Ravasz, Wada and Karmon teach the method of claim 5, wherein, in accordance with a determination that the position of the hand is within the threshold distance of the virtual content, rendering the user interaction with the virtual content is based on the voice input (i.e., Karmon teaches in the background of the invention that the user interface for a computer can include a voice interface) (see [0003]) and the corrected position of the hand (i.e., as seen in figures 6-7, the system of Ravasz tracks the user's hand as well as the fingertip, which, when displayed on the HMD device, would at least include the user's finger joints) (see Fig. 6-7, [0086-0087]).
As to claim 7, Ravasz, Wada and Karmon teach the method of claim 5, wherein, in accordance with a determination that the position of the hand is outside the threshold distance of the virtual content, rendering the user interaction with the virtual content is based on the voice input (i.e., Karmon teaches in the background of the invention that the user interface for a computer can include a voice interface) (see [0003]).
As to claim 17, Ravasz, Wada and Karmon teach the computing system of claim 15, wherein the computing system detects the user interaction with the virtual content by detecting a voice input and renders the user interaction with the virtual content based on the voice input (i.e., Karmon teaches in the background of the invention that the user interface for a computer can include a voice interface) (see [0003]) and the corrected position of the hand (i.e., as seen in figures 6-7, the system of Ravasz tracks the user's hand as well as the fingertip, which, when displayed on the HMD device, would at least include the user's finger joints) (see Fig. 6-7, [0086-0087]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The prior art Bar-Zeev et al. (US Pub: 2024/0202959 A1) is cited to teach another type of augmented display collaboration system in the figures 1-7 embodiments.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALVIN C. MA whose telephone number is (571)270-1713. The examiner can normally be reached 8:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C. Lee can be reached on 571-272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CALVIN C MA/Primary Examiner, Art Unit 2629 January 10, 2026