Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05/01/2025 has been entered.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim limitations “display control unit” (claim 1) (205, see Figs. 2, 7, Par. 27), “first acquisition unit” (claim 1) (206, see Figs. 2, 7, 8, Par. 29), “second acquisition unit” (claim 1) (208, see Figs. 2, 7, Par. 31), and “control unit” (201, see Fig. 2, Par. 24) invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 7-10, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Lacey et al. (US 20190362557 A1), hereinafter Lacey, in view of Connellan et al. (US 2019/0113966 A1), hereinafter Connellan.
Regarding Claim 1, Lacey teaches:
An information processing apparatus (See FIG. 2B: 200) comprising at least one memory (See paragraph [0101]) and at least one processor (FIG. 2B: 128) which function as a plurality of units (See paragraphs [0101], [0102] and FIG. 2B: the Examiner is interpreting the processors 128 and associated memory that control the various hardware components discussed below as corresponding to a respective unit) comprising:
(1) a display control unit (See paragraph [0100] and FIG. 2A: the memory and processor coupled to the display 220 function as a display control unit) configured to display a plurality of virtual objects (virtual objects 1210, 1220, 1230, 1242, and 1244) such that the virtual objects are arranged in a three-dimensional space (See paragraph [0177]) that is a visual field of a user (1250) (See FIG. 12B) (See paragraph [0172]);
(2) a first acquisition unit (See paragraph [0086], [0194] and FIG. 2B: the memory and processor coupled to the camera 124 in the head mounted wearable component 58 and the IMU 102 in the hand held component 606 function as a first acquisition unit) configured to acquire information of a position of an operating body at a position of a hand of the user (See paragraph [0364], lines 15-19: the tracked controller or tracked hand targeting vector);
(3) a second acquisition unit (See paragraph [0109], lines 1-4 and FIG. 2B: the IMU in the head mounted wearable component 58 function as a second acquisition unit) configured to acquire information of a line-of-sight position of the user (See paragraph [0128], last four lines); and
(4) a control unit (See paragraphs [0101], [0102] and FIG. 2B: the processors 128 in the head mounted wearable component 58 and the hand held component 606 function as a control unit) configured to (a) set a first operation mode in a case where a distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: convergence between the head and hand results in a first operation mode) is smaller than a predetermined threshold (See paragraph [0084]; See paragraph [0364], lines 1-19), (b) set a second operation mode in a case where the distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: divergence between the head and hand results in a second operation mode) is larger than the predetermined threshold (See paragraph [0373]; See paragraph [0414]).
Lacey does not explicitly teach:
(c) even when the second operation mode is set, switch to the first operation mode in a case where it is determined that the user is not in a predetermined posture in which the operating body is held in front of the user.
However,
Connellan discloses even when the second operation mode is set (e.g. 1520a in the FOV so the mode is set to a higher sensitivity) (Fig. 15, Pars. 146, 148, 153-154), switch to the first operation mode (e.g. 1520b not in the FOV so the mode is set to a lower sensitivity) in a case where it is determined that the user is not in a predetermined posture in which the operating body is held in front of the user (Fig. 15, Pars. 148, 153-154, see also Par. 155).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified Lacey with the teaching of Connellan to improve tracking in a VR/AR system, as suggested by Connellan (Par. 146).
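For illustration only, the threshold-based mode selection recited in claim 1, together with the Connellan-style posture override, can be summarized in the following sketch. Neither reference discloses source code; every name, the posture test, and the tuple-based positions are hypothetical.

```python
import math

def select_mode(operating_body_pos, gaze_pos, threshold, in_front=True):
    """Illustrative sketch (not disclosed by Lacey or Connellan) of the
    claimed mode selection: a first (direct) mode when the operating body
    converges with the line-of-sight position, a second (ray) mode when
    they diverge, with a switch back to the first mode when the operating
    body is not held in front of the user."""
    # Distance between the operating-body position and the gaze position.
    distance = math.dist(operating_body_pos, gaze_pos)
    mode = "first" if distance < threshold else "second"
    # Posture-based override: even in the second mode, fall back to the
    # first mode if the operating body is not in the predetermined posture.
    if mode == "second" and not in_front:
        mode = "first"
    return mode
```

The sketch collapses limitations (a), (b), and (c) into one function purely to make the claimed decision logic easy to follow.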
Regarding Claim 9, see claim 1 rejection and motivation above. Lacey further teaches:
An information processing method (See the method depicted in FIG. 53, as well as the steps described below) comprising:
displaying a plurality of virtual objects (virtual objects 1210, 1220, 1230, 1242, and 1244) such that the virtual objects are arranged in a three-dimensional space (See paragraph [0177]) that is a visual field of a user (1250) (See FIG. 12B) (See paragraph [0172]);
acquiring information of a position of an operating body at a position of a hand of the user (See paragraph [0364], lines 15-19: the tracked controller or tracked hand targeting vector);
acquiring information of a line-of-sight position of the user (See paragraph [0128], last four lines); and
setting a first operation mode in a case where a distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: convergence between the head and hand results in a first operation mode) is smaller than a predetermined threshold (See paragraph [0084]; See paragraph [0364], lines 1-19);
setting a second operation mode in a case where the distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: divergence between the head and hand results in a second operation mode) is larger than the predetermined threshold (See paragraph [0373]; See paragraph [0414]); and
selecting a virtual object which is indicated by the item in the second operation mode (See paragraph [0428] and FIG. 53: an item indicating a direction to which the operating body is directed (palm-to-fingertip ray cast) is determined, and the virtual object indicated by the palm-to-fingertip ray cast is selected in the second operation mode) (See also FIG. 46E, showing a selection of the virtual object in the second operation mode (Totem Projected Raycast / Hand Feature Point RayCast)).
Regarding Claim 10, see claim 1 rejection and motivation above. Lacey further teaches:
A non-transitory computer-readable storage medium that stores a program for causing a computer (See paragraph [0508]) to execute an information processing method (See the method depicted in FIG. 53, as well as the steps described below), the method comprising:
displaying a plurality of virtual objects (virtual objects 1210, 1220, 1230, 1242, and 1244) such that the virtual objects are arranged in a three-dimensional space (See paragraph [0177]) that is a visual field of a user (1250) (See FIG. 12B) (See paragraph [0172]);
acquiring information of a position of an operating body at a position of a hand of the user (See paragraph [0364], lines 15-19: the tracked controller or tracked hand targeting vector);
acquiring information of a line-of-sight position of the user (See paragraph [0128], last four lines); and
setting a first operation mode in a case where a distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: convergence between the head and hand results in a first operation mode) is smaller than a predetermined threshold (See paragraph [0084]; See paragraph [0364], lines 1-19);
setting a second operation mode in a case where the distance between the position of the operating body and the line-of-sight position (See paragraph [0082]: divergence between the head and hand results in a second operation mode) is larger than the predetermined threshold (See paragraph [0373]; See paragraph [0414]).
Regarding Claim 3, Lacey in view of Connellan teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey teaches:
The information processing apparatus according to claim 1, wherein the control unit sets the first operation mode in a case where a state where the distance between the position of the operating body and the line-of-sight position is smaller than the predetermined threshold continues for a predetermined time, and sets the second operation mode in a case where a state where the distance between the position of the operating body and the line-of-sight position is larger than the predetermined threshold continues for the predetermined time (See paragraph [0394]: transmodal input fusion techniques provide for dynamic coupling of sensor input, e.g., identifying sensor inputs that have converged temporally (e.g., for a fixation or dwell time); See paragraph [0415]).
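The dwell-time condition of claim 3 behaves like a debounce: a mode change takes effect only after the distance has stayed on one side of the threshold for a predetermined time. The following sketch is hypothetical (Lacey describes temporal convergence/fixation, not this code); all names are illustrative.

```python
class DwellModeController:
    """Illustrative sketch of claim-3-style dwell-time mode switching:
    a candidate mode must persist for `dwell_time` before it is adopted."""

    def __init__(self, threshold, dwell_time):
        self.threshold = threshold
        self.dwell_time = dwell_time
        self.mode = "first"
        self._pending = None        # candidate mode awaiting the dwell time
        self._pending_since = 0.0   # timestamp when the candidate appeared

    def update(self, distance, now):
        candidate = "first" if distance < self.threshold else "second"
        if candidate == self.mode:
            self._pending = None            # condition no longer persists
        elif candidate != self._pending:
            self._pending = candidate       # start timing the new candidate
            self._pending_since = now
        elif now - self._pending_since >= self.dwell_time:
            self.mode = candidate           # dwell satisfied; switch modes
            self._pending = None
        return self.mode
```

A transient spike in the distance therefore cannot flip the mode; only a state that persists for the full dwell window does.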
Regarding Claim 7, Lacey in view of Connellan teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey teaches:
The information processing apparatus according to claim 1, wherein
the operating body is the hand of the user (See FIG. 46C, showing examples of when the operating body is the hand of the user), and
the first acquisition unit acquires information of the position of the operating body by detecting the hand of the user from a captured image (FIG. 2B: from camera 124) of the three-dimensional space (See paragraph [0086]).
Regarding Claim 8, Lacey in view of Connellan teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey teaches:
The information processing apparatus according to claim 1, wherein
the operating body is a controller (See FIG. 46C, showing examples of when the operating body is a controller), and
the first acquisition unit acquires information of a position of the controller acquired by a sensor (FIG. 2B: IMU 102) included in the controller as information of the position of the operating body (See paragraph [0194]).
Regarding Claim 12, Lacey in view of Connellan teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey teaches:
The information processing apparatus according to claim 1, wherein the first operation mode is a hand operation mode that enables the user to perform an operation on a virtual object by directly touching and selecting the virtual object with the user’s hand (See paragraph [0428] and FIG. 53: the first operation mode is a hand operation mode that enables the user to perform an operation on the first virtual object by directly touching and selecting the first virtual object with the user's hand using the convergence point 5301).
Regarding Claim 13, Lacey in view of Connellan teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey teaches:
The information processing apparatus according to claim 1, wherein the second operation mode is a ray operation mode that enables the user to perform an operation on a virtual object by pointing a distal end portion of a ray to and selecting the virtual object (See paragraph [0428] and FIG. 53: the second operation mode is a ray operation mode that enables the user to perform an operation on the second virtual object by pointing a distal end portion of a ray to and selecting the second virtual object using the convergence point 5307).
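For illustration of the ray operation mode of claim 13, selection by ray cast is commonly implemented as a nearest-hit intersection test along the pointing direction. The sketch below models virtual objects as spheres purely for simplicity; the function, its parameters, and the sphere model are hypothetical and are not taken from either reference.

```python
import math

def pick_along_ray(origin, direction, objects):
    """Illustrative sketch of ray-cast selection: cast a ray from `origin`
    along unit-length `direction` and return the index of the nearest
    virtual object hit, where each object is a (center, radius) sphere.
    Returns None when the ray hits nothing."""
    best, best_t = None, math.inf
    for i, (center, radius) in enumerate(objects):
        # Ray-sphere intersection: solve t^2 + 2bt + c = 0 for ray
        # parameter t (assumes `direction` is normalized).
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - c
        if disc < 0:
            continue                    # ray misses this sphere
        t = -b - math.sqrt(disc)        # nearest intersection distance
        if 0 <= t < best_t:
            best, best_t = i, t
    return best
```

The distal end portion of the displayed ray would then terminate at the nearest hit, and that object becomes the selection target.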
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Lacey in view of Connellan as applied to claim 1 above, and further in view of Tang et al. (US 20200226814 A1), hereinafter Tang.
Regarding Claim 4, Lacey in view of Connellan as combined above does not explicitly teach:
The information processing apparatus according to claim 1, wherein the control unit sets the second operation mode in a case where none of the plurality of the virtual objects are arranged within a predetermined range from the position of the operating body.
However, Tang teaches further:
A control unit (FIG. 7: 710) sets the second operation mode in a case where none of a plurality of virtual objects are arranged within a predetermined range from the position of the operating body (See paragraph [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the information processing apparatus (as taught by Lacey in view of Connellan as combined above) so the control unit sets the second operation mode in a case where none of a plurality of virtual objects are arranged within a predetermined range from the position of the operating body (as taught by Tang). Doing so would allow the user to directly target and select the virtual object using a natural interaction methodology (See Tang, paragraph [0055]).
Regarding Claim 5, see claim 4 rejection and motivation above. Lacey in view of Connellan and Tang teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey in view of Connellan and Tang teaches:
The information processing apparatus according to claim 1, wherein the display item is an item of a beam shape that extends from the position of the operating body (See Tang, paragraph [0015], lines 1-4).
Regarding Claim 6, see claim 4 rejection and motivation above. Lacey in view of Connellan and Tang teaches all of the elements of the claimed invention, as stated above. Furthermore, Lacey in view of Connellan and Tang teaches:
The information processing apparatus according to claim 1, wherein the control unit performs control not to display the display item in a case where the operating body is not located in front of the user (See Tang, paragraph [0040] and FIG. 4B: since the targeting ray 470 is cast when the operating body is located in the field of view of the head-mounted display 425, the targeting ray 470 is therefore controlled not to be displayed in a case where the operating body is not located in front of the user (e.g., at the user’s side, out of the field of view of the head-mounted display 425)).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lacey in view of Connellan as applied to claim 1 above, and further in view of Fan et al. (US 20220198756 A1; Previously cited in the PTO-892 dated 09/18/2024), hereinafter Fan.
Regarding Claim 11, Lacey in view of Connellan does not explicitly teach:
The information processing apparatus according to claim 1, wherein the predetermined threshold is determined based on at least any of (1) a size of a virtual object at the position of the operating body and (2) a distance between the virtual object and another virtual object.
However, in the same field of endeavor, systems for human interaction with an electronic device (Fan, paragraph [0001]), Fan teaches:
A predetermined threshold is determined based on at least any of (1) a size of a virtual object at the position of the operating body and (2) a distance between a virtual object and another virtual object (See FIG. 1A: the virtual objects 22, 22a) (See paragraph [0102]: the predetermined threshold is made smaller in a densely populated XR environment 20).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the information processing apparatus (as taught by Lacey in view of Connellan) so the predetermined threshold is determined based on at least any of (1) a size of the first virtual object at the position of the operating body and (2) a distance between the first virtual object and another virtual object (as taught by Fan). Doing so would avoid accidentally selecting an unwanted object in a densely populated XR environment (See Fan, paragraph [0102]).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 and 3-13 have been considered but are moot in view of the new ground(s) of rejection.
Examiner notes that the new claim elements are now addressed by the Connellan reference, as necessitated by the amendments. Please see above for the full basis of rejection as taught by Lacey in view of Connellan.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2019/0146222 A1 to Hiroi teaches a head-mounted display apparatus with a head movement detection unit configured to detect an amount of rotation angle of a head of a user, and a display controller configured to control a display mode of the pointer image.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARURAT SUTEERAWONGSA whose telephone number is (571)270-7361. The examiner can normally be reached Monday thru Thursday, 8:30AM to 6:00PM, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lun Yi Lao can be reached at 571-272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JARURAT SUTEERAWONGSA/Examiner, Art Unit 2619
/LUNYI LAO/Supervisory Patent Examiner, Art Unit 2619