Prosecution Insights
Last updated: April 19, 2026
Application No. 19/107,590

INTERFACE DEVICE AND INTERFACE SYSTEM

Current Office Action: Non-Final, §103 and §112
Filed: Feb 28, 2025
Examiner: ADEDIRAN, ABDUL-SAMAD A
Art Unit: 2621
Tech Center: 2600 (Communications)
Assignee: Mitsubishi Electric Corporation
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 1m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 78% (481 granted / 617 resolved), +16.0% vs Tech Center average (above average)
Interview Lift: +13.9% (moderate lift, on resolved cases with vs. without interview)
Avg Prosecution: 2y 1m (fast prosecutor); 22 applications currently pending
Career History: 639 total applications across all art units
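
As a quick sanity check, the headline numbers above can be recomputed from the raw counts. The sketch below is illustrative Python, not vendor code; the variable names, and the assumption that the 92% with-interview figure is simply the career allow rate plus the interview lift, are ours.

    # Reproduce the dashboard's headline examiner statistics from raw counts.
    granted, resolved = 481, 617

    allow_rate = granted / resolved        # 0.7796 -> the "78%" career allow rate
    tc_average = allow_rate - 0.160        # "+16.0% vs TC avg" implies a ~62% TC baseline
    with_interview = allow_rate + 0.139    # allow rate plus the "+13.9%" interview lift

    print(f"career allow rate:  {allow_rate:.1%}")      # 78.0%
    print(f"implied TC average: {tc_average:.1%}")      # 62.0%
    print(f"with interview:     {with_interview:.1%}")  # 91.9%, shown as 92%

Running it reproduces 78.0%, an implied Tech Center baseline near 62%, and 91.9%, which the dashboard evidently rounds to the 92% with-interview figure shown above.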

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 19.5% (-20.5% vs TC avg)
§112: 29.0% (-11.0% vs TC avg)

Based on career data from 617 resolved cases; the Tech Center average (the chart's black reference line) is an estimate.
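
Reading the "vs TC avg" deltas the same way lets the baseline be backed out of each row; notably, every statute implies the same ~40% Tech Center average, consistent with a single black reference line. A minimal sketch under that reading (all names illustrative):

    # Back out the implied Tech Center baseline from each statute row:
    # baseline = examiner rate minus the reported percentage-point delta.
    rows = {"§101": (1.8, -38.2), "§103": (41.2, +1.2),
            "§102": (19.5, -20.5), "§112": (29.0, -11.0)}

    for statute, (rate, delta) in rows.items():
        baseline = rate - delta  # each row yields ~40.0
        print(f"{statute}: examiner {rate:.1f}% vs implied TC average {baseline:.1f}%")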

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged. Acknowledgment is also made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 19/107,590, filed on February 28, 2025.

Oath/Declaration

The Oath/Declaration filed on February 28, 2025 is noted by the Examiner.

Claim Objections

Claim 1 is objected to because of the following informality: the limitation "the three-dimensional position of the detection target detected by the detector is included" in the seventh line is indefinite, because it is unclear exactly how that three-dimensional position is "included" with respect to the scope of the claim. The Examiner suggests amending the limitation, without adding new matter, to clarify how the position is included. Claims dependent on claim 1 are objected to for the same reason.

Claim 26 is objected to: "a display device" in the third line is indefinite because it is unclear whether it refers to the same "display device" recited in the ninth line of claim 1 or to a different display device. The Examiner suggests amending, without adding new matter, to resolve the ambiguity.

Claim 29 is objected to: the coined term "a projection mode" in the second line renders the claim indefinite because its meaning is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends amending the claim, without adding new matter, to positively recite in definite terms what the "projection mode" actually is.

Claim 30 is objected to: "one or more of the aerial images" in the first and second lines is unclear because the plural term appears for the first time without being previously recited in claim 30 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue.

Claim 31 is objected to: the third line of the claim contains extra white space, which the Examiner suggests removing. Appropriate correction is required. Claims dependent on claim 31 are objected to for the same reason.
Claim 32 is objected to: "two or more of the aerial images" in the fifth line is unclear because the plural term appears for the first time without being previously recited in claim 32 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue.

Claim 35 is objected to: "the beam splitters" and "the retroreflection members" in the third line are unclear because the plural terms appear for the first time without being previously recited in claim 35 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issues.

Claim 36 is objected to: "an imaging optical system" in the second line is indefinite because it is unclear whether it refers to "the one or the plurality of imaging optical systems" recited in the first and second lines of claim 31 or to a different imaging optical system. The Examiner suggests amending, without adding new matter, to resolve the ambiguity.

Claim 39 is objected to: "the boundary plane" and "the plane" in the second line are unclear because the terms appear for the first time without being previously recited in claim 39 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issues.

Claim 41 is objected to: "the posture of each of the aerial images" and "the aerial images" in the third through fifth lines are unclear because the terms appear for the first time without being previously recited in claim 41 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issues. Claims dependent on claim 41 are objected to for the same reason.

Claim 42 is objected to: "the aerial images" in the fourth line is unclear because the plural term appears for the first time without being previously recited in claim 42 or in a claim from which it depends. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue.

Claim 43 is objected to: "the respective operation spaces" in the twelfth through thirteenth lines is unclear because the term appears for the first time without being previously recited in the claim. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue. Claims dependent on claim 43 are objected to for the same reason.

Claims 45 and 46 are objected to: the word "can," recited in the second line of claim 45 and the third line of claim 46, renders the claims indefinite because it is unclear whether the limitations following it are part of the claimed invention. See MPEP § 2173.05(d). For the purposes of furthering examination, the Examiner suggests replacing "can" with language that positively recites limitations clearly defining the scope of each claim.

Claim 47 is objected to: the phrase "can be" in the fourth line renders the claim indefinite because it is unclear whether the limitations following it are part of the claimed invention. See MPEP § 2173.05(d). The Examiner suggests replacing "can be" with language that positively recites limitations clearly defining the scope of the claim.

Claim 48 is objected to on two grounds. First, the limitation "the three-dimensional position of the detection target detected by the detector is included" in the seventh through eighth lines is indefinite, because it is unclear exactly how that position is "included" with respect to the scope of the claim; the Examiner suggests amending, without adding new matter, to clarify how the position is included. Second, the limitation "the virtual space is comprises" in the sixth line is indefinite, because it is unclear exactly what the virtual space is; the Examiner suggests amending, without adding new matter, to clarify what the virtual space is or comprises. Claims dependent on claim 48 are objected to for the same reasons.
Claims 1, 43, 48, 50, and 51 are objected to because of the following informalities: the terms "a detector" (recited in the second line of claims 1, 48, 50, and 51 and in the third line of claim 43), "an acquisitor" (fourth lines of claims 50 and 51), "a determiner to" (eighth lines of claims 50 and 51), and "an operation information outputter" (twelfth lines of claims 50 and 51) were not disclosed and supported in the specification, and were not present in the original set of claims at the time of filing. The terms "detector," "acquisitor," "determiner," and "outputter" therefore constitute new matter. See MPEP 201.07. Claims dependent on claims 1, 43, or 48 are objected to for the same reason.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 26, and 38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "the display information," "the display device," and "the movement of the detection target" in the ninth through tenth lines, but there is insufficient antecedent basis for these limitations: the claim uses each term for the first time without previously reciting it, which further obscures what the display information and the display device actually refer to. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issues. Claims dependent on claim 1 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, for the same reason.

Claim 26 recites "the display information" in the second line, but there is insufficient antecedent basis for the limitation: the term appears for the first time without being previously recited in claim 26 or in a claim from which it depends, which further obscures what the display information actually refers to. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue.
Claim 38 recites "the video information" in the third line, but there is insufficient antecedent basis for the limitation: the term appears for the first time without being previously recited in claim 38 or in a claim from which it depends, which further obscures what the video information actually refers to. The Examiner suggests amending, without adding new matter, to resolve the antecedent basis issue.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 26, 28-30, 45, 48, and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Kaede, U.S. Patent Application Publication 2021/0200321 A1 (hereinafter Kaede), in view of Hara et al., U.S. Patent Application Publication 2022/0308695 A1 (hereinafter Hara).

Regarding claim 1, Kaede teaches an interface device comprising:

A detector to detect a three-dimensional position of a detection target in a virtual space (1, 30, FIG. 1). Paragraph [0041] of Kaede teaches an information processing system 1 that includes a floating image forming device 10, a control device 20 that controls the floating image forming device 10, and a sensor 30 that detects the position of an object, such as a hand, a finger, a pen, or an electronic device, in the space where the floating image is being displayed.

A projector to project an aerial image onto the virtual space (10; #1, #2, #2a, and #2b; FIGS. 1, 3A-4, 7-8, and 18). Paragraph [0048] of Kaede teaches that the floating image forming device 10 directly presents the floating image in midair, using any of several methods that include a half-mirror, a beam splitter, a micromirror array, a microlens array, a parallax barrier, and plasma emission, or some other method that becomes usable in the future. See also at least paragraphs [0046]-[0047], [0054]-[0055], [0065]-[0081], [0156]-[0163], [0185]-[0191], and [0201] of Kaede (the floating image has an outer border viewable to a user).

The limitation wherein the virtual space comprises a plurality of operation spaces for each operation executable by a user when the three-dimensional position of the detection target detected by the detector is included, the operation being defined in each operation space among the plurality of operation spaces (11A to 11G, PERSON A through PERSON D, CREATE PDF FILE, PC1-PC2, SMARTPHONE, TABLET; FIGS. 1, 3A-4, 7-8, 11A-11B, 12-13, 18, and 23-26). Paragraphs [0121]-[0122] of Kaede describe FIG. 12, in which a screen like one displayed on a PC or a smartphone is displayed as the floating image #1 with seven buttons 11A to 11G ("Back", "Info", "New", "Open", "Save", "Save as", and "Print"), and FIG. 13, which shows a floating image #2a presented on the near side of the detection region A. See also at least paragraphs [0054]-[0055], [0065]-[0081], [0120], [0123]-[0128], [0156]-[0163], [0185]-[0191], [0201], [0204]-[0205], [0208], and [0221]-[0222] of Kaede. In other words, Kaede teaches at least one detection region that includes floating image(s) capable of having buttons, divided sub-regions, and folder icons, each with boundaries and each having a different operational function responsive to a touch, wherein the sub-regions correspond to surfaces of the floating image(s).

The limitation reciting a predetermined pointer operation to the display information of the display device in conjunction with the movement of the detection target in the operation space, the operation being defined in at least any one of the plurality of operation spaces (FIGS. 1, 3A-4, 7-8, 11A-11B, 12-13, 18, and 23-26). Paragraph [0208] of Kaede teaches that, in FIG. 26, the sales file is surrounded by a thick line to indicate that it has been selected, and that the selection may be performed by a gesture in midair (such as moving the fingertip through the region where the sales file is displayed) or by an operation on a touch panel, a mouse, or a trackpad. See also at least paragraphs [0054]-[0055], [0065]-[0081], [0120]-[0128], [0156]-[0163], [0185]-[0191], [0201], [0204]-[0207], [0209]-[0212], and [0221]-[0222] of Kaede.

And the limitation that a boundary position of each of the operation spaces in the virtual space is indicated by the aerial image projected by the projector: again per paragraphs [0121]-[0122] of Kaede and the further passages cited above for the operation spaces.

Kaede does not expressly teach movement. However, Hara teaches movement (FIGS. 6A-6B). Paragraphs [0121]-[0122] of Hara teach a circular pointer image 4a whose size the processor 60 reduces so that it converges on the position of a finger 51 (substantially at the center of the spatial projected image in FIG. 6A) as the finger enters deeper into the detection target region 41; icons of other shapes (a polygon, an arrow) and partially opened figures (a broken line, a chain line) may also be used, and the operator can determine that the input operation has been recognized by the spatial projection apparatus 100 by visually confirming that the pointer image has reduced its size. See also at least paragraphs [0043] and [0045]-[0046] of Hara.

Kaede and Hara are considered analogous art: they are from the same field of endeavor (detection devices) and address the same problem of suitably detecting the position of a finger. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Kaede based on Hara to have a pointer movement operation. One reason for the modification, as taught by Hara, is so the operator can determine that an input operation has been recognized by the spatial projection apparatus by visually confirming that the pointer has reduced its size (paragraph [0044] of Hara). The same motivation and rationale to combine apply to each dependent claim addressed in the corresponding statement of grounds of rejection.

Regarding claim 26, Kaede and Hara teach the interface device according to claim 1, wherein the operation includes at least any one of a movement operation of a predetermined pointer to the display information of a display device, an execution operation of a predetermined command, or an operation performed by using a mouse or a touch panel: see paragraph [0208] of Kaede (selection of the sales file by midair gesture, touch panel, mouse, or trackpad), together with the passages cited above for claim 1.
Regarding claim 28, Kaede and Hara teach the interface device according to claim 1, wherein the projector forms the aerial image in the virtual space to cause the aerial image to include an angle of view of the detector (FIG. 1). Paragraph [0054] of Kaede teaches that the detection region A of sensor 30A is provided in correspondence with the floating image #1 presented in midair: with the floating image #1 treated as a reference surface, the detection region A is set substantially parallel to the reference surface with a predetermined width in the normal direction (from several millimeters to approximately one centimeter on each of the near and far sides of the floating image), and need not be centered on the floating image in the thickness direction; the detection region A is one example of a first region. See also at least paragraphs [0041] and [0055] of Kaede.

Regarding claim 29, Kaede and Hara teach the interface device according to claim 1, wherein the projector changes a projection mode of the aerial image projected onto the virtual space depending on at least one of an operation space in which the three-dimensional position of the detection target detected by the detector is included, or movement of the detection target in that operation space: see paragraph [0208] of Kaede, quoted above for claim 1 (the sales file is surrounded by a thick line responsive to selection by midair gesture or touch panel), together with the further passages cited there.

Regarding claim 30, Kaede and Hara teach the interface device according to claim 1, wherein one or more of the aerial images are projected onto the virtual space and at least one of the aerial images presents one of an outer frame or an outer surface of the virtual space to the user (#1, FIGS. 1, 3A-4, 7-8, and 18): see paragraph [0048] of Kaede, quoted above for claim 1, which (as shown in at least FIG. 1) presents a floating image with an outer border viewable to the user.

Regarding claim 45, Kaede and Hara teach the interface device according to claim 1, wherein the projector can project the aerial image as indicating only a boundary position of the operation space (FIGS. 1, 3A-4, 7-8, and 18): again per paragraph [0048] of Kaede and the passages cited above for claim 1.
Regarding claim 48, Kaede teaches an interface system comprising:

A detector to detect a three-dimensional position of a detection target in a virtual space (1, 30, FIG. 1): paragraph [0041] of Kaede, quoted above for claim 1.

A projector to project an aerial image onto the virtual space (10; #1, #2, #2a, and #2b; FIGS. 1, 3A-4, 7-8, and 18): paragraph [0048] of Kaede, quoted above for claim 1.

A display device to display (FIGS. 1, 3A-4, 7-8, and 18). Paragraph [0078] of Kaede describes presenting an image indicating the file treated as the process target as the floating image #1: for example, a rectangle with the file name "Patent_Law.docx" or a preview of it; touching the surface of the floating image selects the file (indicated by changing the color of the floating image), while passing the finger through the surface opens it, and the contents of the opened file may be presented as a floating image by the floating image forming device 10 or displayed on a display (not illustrated) capable of communicating with the processor 21.

The limitation wherein the virtual space comprises a plurality of operation spaces for each operation executable by a user when the three-dimensional position of the detection target detected by the detector is included, the operation being defined in each of the operation spaces (11A to 11G, PERSON A through PERSON D, CREATE PDF FILE, PC1-PC2, SMARTPHONE, TABLET; FIGS. 1, 3A-4, 7-8, 11A-11B, 12-13, 18, and 23-26): paragraphs [0121]-[0122] of Kaede and the further passages cited above for claim 1.

And the limitation that a boundary position of each of the operation spaces is indicated by the aerial image projected by the projector, the aerial image being visually recognizable by the user: again per paragraphs [0121]-[0122] of Kaede (a detection region including floating images with buttons, divided sub-regions, and user-identifiable folder icons, each with boundaries and different operational functions responsive to a touch).

Kaede does not expressly teach video information, or the video information displayed on the display device. However, Hara teaches these (FIG. 1). Paragraph [0025] of Hara teaches a processor 60 (a personal computer, smartphone, PDA, or the like) that can transmit image data, including a video image and a still image, and audio data from an internal or external storage unit to the projector 10, and that detects input from the LiDAR sensor 70 and controls the operations of the projector 10 and the output unit 80. See also at least paragraphs [0016] and [0018] of Hara.

Kaede and Hara are analogous art for the reasons given for claim 1, and before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Kaede based on Hara to display video information on the display device, for the reason given at paragraph [0044] of Hara.

Regarding claim 52, Kaede and Hara teach the interface system according to claim 48, wherein in at least any one of the plurality of operation spaces, an operation of a predetermined pointer to display information of the display device in conjunction with movement of the detection target in the operation space is defined: see paragraph [0208] of Kaede, quoted above for claim 1. Kaede does not expressly teach movement, but Hara teaches movement (FIGS. 6A-6B, paragraphs [0121]-[0122] of Hara, quoted above for claim 1: a pointer image that converges on the position of a finger).

Claims 31 and 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over Kaede, in view of Hara, and Maekawa et al., U.S. Patent Application Publication 2010/0110384 A1 (hereinafter Maekawa).
Regarding claim 31, Kaede and Hara teach the interface device according to claim 1, but do not expressly teach wherein the projector is one or a plurality of imaging optical systems forming light emitted by the light source as a real image, the real image by the light source being formed as the aerial image. However, Maekawa teaches this limitation (P, 3, and 2; FIGS. 1-5, 7-8, and 11). Paragraph [0054] of Maekawa explains imaging by a dihedral corner reflector array 3: light emitted from the object to be projected O passes through the hole 32 prepared in the substrate 31, is reflected once from one specular surface 21 (or 22) of a dihedral corner reflector 2 and again from the other specular surface 22 (or 21), and therefore forms the real image P of the object O at a planar symmetric position with respect to the optical device plane 3S of the array. As shown in FIG. 6, the real image P is observable from oblique directions where the specular surfaces 21 and 22 are visible; because light reflected by the two mutually perpendicular specular surfaces always passes through a point in a planar symmetric position with respect to the optical device plane 3S, all light rays emitted in every direction from the object O as a light source and reflected twice while passing through the array converge at the same focus point. See also at least paragraphs [0021], [0047]-[0053], [0055]-[0062], [0065], and [0068] of Maekawa. In other words, Maekawa teaches a system in which light emitted from the object passes through a dihedral corner reflector array that reflects it twice and bends it, via an optical device plane, to a focus point, forming a recognizable real mirror image that appears floating, at the same viewing angle as the object, at a symmetric position with respect to the optical device plane (the substrate) of the array.

Kaede, Hara, and Maekawa are considered analogous art: they are from the same field of endeavor (detection devices) and address the same problem of suitably detecting the position of a finger. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the system of Kaede based on Hara and Maekawa so that the projector is one or a plurality of imaging optical systems forming light emitted by the light source as a real image, the real image being formed as the aerial image. One reason for the modification, as taught by Hara, is so the operator can determine that an input operation has been recognized by visually confirming that the pointer has reduced its size (paragraph [0044] of Hara); another, as taught by Maekawa, is to have an optical device that allows suitable interaction between floating real images and users (paragraph [0001] of Maekawa). The same motivation and rationale to combine apply to each dependent claim addressed in the corresponding statement of grounds of rejection.

Regarding claim 36, Kaede, Hara, and Maekawa teach the interface device according to claim 31, wherein an imaging optical system has a light beam bending plane that forms one plane in which an optical path of light emitted from a light source is bent, the imaging optical system forming a real image with the light source disposed on one surface side of the light beam bending plane, the real image being formed as the aerial image on an opposite surface side of the light beam bending plane (3S, P; FIGS. 1-5, 7-8, and 11): see paragraph [0054] of Maekawa, quoted above for claim 31 (the real image P is formed at a planar symmetric position with respect to the optical device plane 3S, on the opposite side of the plane from the object O).

Regarding claim 37, Kaede, Hara, and Maekawa teach the interface device according to claim 31, wherein an imaging optical system has a light beam bending plane that forms one plane in which an optical path of light emitted from a light source is bent (paragraph [0054] of Maekawa, quoted above for claim 31), and the detector is disposed in an internal region of the imaging optical system, on one surface side of the light beam bending plane of the imaging optical system (FIGS. 1-5, 7, and 11). Paragraphs [0056]-[0057] of Maekawa teach two cameras 51 (for instance, digital cameras with solid-state imaging devices such as CCD or CMOS) located at fixed positions inside the enclosure 4, around the object to be projected O and facing the direction of the real image P, so that they record light passing directly through the holes 32 in the substrate 31 from the area around the real image P; because the real image P is projected upward, the cameras record only the user (user object) U accessing the real image P, not the real image itself; the recorded image is input to the image processing device 52, which runs an image processing program and a user object recognition program to find the image of the user object (step S1 of FIG. 7) and determine, by triangulating measurement, the three-dimensional position of each point of the user object (step S2). See also at least paragraphs [0021], [0047]-[0048], [0058]-[0061], and [0065] of Maekawa.
Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Kaede, in view of Hara, and Malin et al., U.S. Patent Application Publication 2023/0112984 A1 (hereinafter Malin).

Regarding claim 39, Kaede and Hara teach the interface device according to claim 1, further comprising (FIG. 1, paragraph[0041] of Kaede teaches that the information processing system 1 illustrated in FIG. 1 includes a floating image forming device 10, a control device 20 that controls the floating image forming device 10, and a sensor 30 that detects the position of an object, such as a hand, a finger, a pen, or an electronic device, in the space where the floating image is being displayed (i.e., Kaede teaches an information processing system that includes a floating image forming device and a sensor that detects a position of an object (e.g., a hand, finger, pen, or electronic device) in the space where the floating image is being displayed)); but they do not expressly teach a controller to change an angle at which the boundary plane as the plane onto which the aerial image is projected in the virtual space spatially intersects with a display plane of the display device or set an angle at which the boundary plane does not intersect with a display plane of the display device.

However, Malin teaches a controller to change an angle at which the boundary plane as the plane onto which the aerial image is projected in the virtual space spatially intersects with a display plane of the display device or set an angle at which the boundary plane does not intersect with a display plane of the display device (50 FIGS. 1-3, 5-8, and 10, paragraph[0209] of Malin teaches that the controller 50 is in communication with the first sensor assembly (user sensor) 27 and the second sensor assembly (touch sensor) 12, and processes information received from the sensor assemblies 27, 12 to control the interface 1; for example, the controller 50 can receive information from the user sensor 27 indicating the presence of a user and activate the display 11 to generate the floating image 14; in another example, the controller 50 can receive information from the user sensor 27 indicating the height, eye position, and/or a distance to the user, and the controller 50 can control the positional assembly 42 to move one or more of the display 11 and the optical device 13 to position the floating image at an angle or a location that may be best perceived by the user; sensing physical characteristics of the user, including sensing the presence of a user in the proximity of the interface 1, is further described in reference to FIGS. 4, 5, and 10; and the controller 50 can provide an output to a device 74 based on information from the touch sensor 12 indicative of the input of a user (e.g., the selection of an item depicted on the floating image 14), and See also at least paragraphs[0207]-[0208], [0210], [0221]-[0222], and [0309] of Malin (i.e., Malin teaches a controller that can control a positional assembly to move one or more of a display and an optical device to position a floating image at an angle or a location that may be best perceived by a user)).

Furthermore, Kaede, Hara, and Malin are considered to be analogous art because they are from the same field of endeavor with respect to a detection device, and involve the same problem of forming the detection device for suitably detecting a finger. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Kaede based on Hara and Malin to have a controller to change an angle at which the boundary plane as the plane onto which the aerial image is projected in the virtual space spatially intersects with a display plane of the display device or set an angle at which the boundary plane does not intersect with a display plane of the display device. One reason for the modification as taught by Hara is so the operator can suitably determine that an input operation is recognized by a spatial projection apparatus, by visually confirming that a pointer has reduced its size (paragraph[0044] of Hara). Another reason for the modification as taught by Malin is for generating an image that is perceived by a user to be floating (a “floating image”), positioning the floating image based on sensed user information, and receiving user selections of portions of the floating image, including gestures and inputs made at a distance from the floating image (paragraph[0002] of Malin).
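One plausible reading of the positioning behavior Malin describes (sensed eye height and distance driving the tilt of the floating image) is sketched below; the pivot geometry, names, and numbers are assumptions for illustration, not Malin's implementation:

```python
# Illustrative sketch only, not Malin's code: given the sensed eye height and
# horizontal distance of the user, tilt the floating image so that the image
# plane normal points at the user's eyes.
import math

def tilt_angle_deg(eye_height_m: float, image_height_m: float,
                   distance_m: float) -> float:
    """Tilt (degrees from vertical) aiming the image plane normal at the eyes."""
    return math.degrees(math.atan2(eye_height_m - image_height_m, distance_m))

print(tilt_angle_deg(eye_height_m=1.6, image_height_m=1.1, distance_m=0.8))
# ~32 degrees: a taller or closer user gets a steeper tilt
```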
Claims 43 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Dabir et al., U.S. Patent Application Publication 2021/0382563 A1 (hereinafter Dabir), in view of Kaede, and Hara.

Regarding claim 43, Dabir teaches an interface device that allows an operation of an application displayed on a display device (100a, 104a FIGS. 1A-2, paragraph[0027] of Dabir teaches that, in one example, objects or motions detected within zone 112a-1 and/or zone 112a-2 (FIG. 1A) can be interpreted by system 100a as control information; one illustrative example application is a painting and/or picture editing program in which a virtual “brush” (or pen, pencil, eraser, stylus, paintbrush, or other tool) can apply markings to a virtual “canvas”; in such application(s), zone 112a-1 and/or zone 112a-2 can be designated as a “Menu/Tool selection area,” in which the virtual pen and/or brush is not in contact with the virtual “canvas” and in which tool icons and/or menu options appear on screen 104a; inputs of detected objects and/or motions in these zones can be interpreted firstly to make choices of tools, brushes, canvases, and/or settings, and See also at least paragraphs[0028]-[0035] of Dabir (i.e., Dabir teaches a system, which includes a three-dimensional region of space, that displays tool icons and menu options of an application on a screen)), the interface device comprising:

a detector to detect a three-dimensional position of a detection target in a virtual space (200a, 14 FIG. 1, paragraph[0025] of Dabir teaches that motion sensing device 200a is capable of detecting position as well as motion of hands and/or portions of hands and/or other detectable objects (e.g., a pen, a pencil, a stylus, a paintbrush, an eraser, other tools, and/or a combination thereof) within a region of space 110a from which it is convenient for a user to interact with system 100a; region 110a can be situated in front of, nearby, and/or surrounding system 100a; while FIG. 1A illustrates devices 200a-1, 200a-2, and 200a-3, it will be appreciated that these are alternative implementations shown in FIG. 1A for purposes of clarity; keyboard 106a and position and motion sensing device 200a are representative types of user input devices; other examples of user input devices (not shown in FIG. 1A), such as, for example, a touch screen, light pen, mouse, track ball, touch pad, data glove, and so forth, can be used in conjunction with computing environment 100a; accordingly, FIG. 1A is representative of but one type of system implementation; and it will be readily apparent to one of ordinary skill in the art that many system types and configurations are suitable for use in conjunction with the disclosed technology, and See also at least paragraphs[0024], [0026]-[0035], [0081], [0089], [0091], and [0095] of Dabir (i.e., Dabir teaches a sensing device capable of detecting positions as well as motions of hands within the three-dimensional region of space)) comprising a plurality of operation spaces (112a-1, 112a-2, 114a, 116a, 112b, 114b, 116b, and 214 FIGS. 1A-2, paragraph[0026] of Dabir teaches that tower 102a and/or position and motion sensing device 200a and/or other elements of system 100a can implement functionality to logically partition region 110a into a plurality of zones (112a-1, 112a-2, 114a, 116a of FIG. 1A), which can be arranged in a variety of configurations.
Accordingly, objects and/or motions occurring within one zone can be afforded differing interpretations than like (and/or similar) objects and/or motions occurring in another zone, and See also at least paragraphs[0024]-[0025], [0027]-[0035], [0081], [0089], [0091], and [0095] of Dabir (i.e., Dabir teaches the motion sensing device that is capable of partitioning the three-dimensional region into a plurality of zones));

at least one boundary definer to indicate a boundary position of each of the operation spaces, the at least one boundary definer including one of a line or a plane; and (112a-1, 112a-2, 114a, 116a, 112b, 114b, 116b, and 214 FIGS. 1A-2, paragraph[0035] of Dabir teaches that FIG. 2 illustrates a non-tactile interface implementation in which object(s) and/or motion(s) are detected and presence within a zonal boundary or boundaries is determined; as shown in FIG. 2, one or more zones, including a zone 214, can be defined in space 12 based upon zonal boundaries that can be provided by rule, program code, empirical determination, and/or combinations thereof; positional and/or motion information provided by position and motion sensing device 200 can be used to determine a position A of an object 14 within space 12; generally, an object 14 having an x-coordinate x will be within the x-dimensional boundaries of the zone if xmin ≤ x ≤ xmax; if this does not hold true, then the object 14 does not lie within the zone having x-dimensional boundaries of (xmin, xmax); analogously, object 14 with a y-coordinate y and z-coordinate z will be within the y-dimensional boundaries of the zone if ymin ≤ y ≤ ymax holds true, and will be within the z-dimensional boundaries of the zone if zmin ≤ z ≤ zmax holds true; accordingly, by checking each dimension of the point of interest for presence within the minimum and maximum dimensions of the zone, it can be determined whether the point of interest lies within the zone; one method implementation for making this determination is described below in further detail with reference to FIG. 31; and while illustrated generally using Cartesian (x, y, z) coordinates, it will be apparent to those skilled in the art that other coordinate systems, e.g., cylindrical coordinates, spherical coordinates, etc., can be used to determine the dimensional boundaries of the zone(s), and See also at least paragraphs[0024]-[0026], [0027]-[0034], [0081], [0089], [0091], and [0095] of Dabir (i.e., Dabir teaches program code for defining zones in a space, which can include the three-dimensional region, based on zonal boundaries)).
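The per-dimension bounds test quoted above maps directly to code. A minimal sketch follows; the zone representation and names are illustrative assumptions, not Dabir's actual program code:

```python
# Illustrative sketch only, not Dabir's code. A zone is taken to be an
# axis-aligned box ((xmin, xmax), (ymin, ymax), (zmin, zmax)); other
# coordinate systems (cylindrical, spherical) would use analogous tests.

Zone = tuple[tuple[float, float], tuple[float, float], tuple[float, float]]

def in_zone(point: tuple[float, float, float], zone: Zone) -> bool:
    """Per-dimension bounds check: inside iff min <= coord <= max on every axis."""
    return all(lo <= coord <= hi for coord, (lo, hi) in zip(point, zone))

menu_zone: Zone = ((0.0, 0.2), (0.0, 0.5), (0.0, 0.5))   # hypothetical bounds
print(in_zone((0.1, 0.25, 0.4), menu_zone))              # True
```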
Dabir further teaches the limitation wherein, when the three-dimensional position of the detection target detected by the detector is included in the virtual space, the detection target is enabled to perform a plurality of kinds of operations on the application (FIGS. 1A-2, paragraphs[0030]-[0031] of Dabir teach that, in an implementation, substantially contemporaneous inputs of objects and/or motion in two or more zones can indicate to system 100a that the inputs should be interpreted together; for example, system 100a can detect input(s) of content made by a virtual brush in zone 116a contemporaneous with inputs of commands in zone 112a-1 and/or zone 112a-2; accordingly, the user can employ this mechanism to alter the characteristics (e.g., color, line width, brush stroke, darkness, etc.) of the content input as the content input is being made; while illustrated with examples using adjacent zones for ease of illustration, there is no special need for zones to touch one another; thus, in implementations, zones can be contiguous, dis-contiguous, or combinations thereof; in some implementations, inter-zone spaces can be advantageously interposed between zones to facilitate application-specific purposes; further, as illustrated by zone 112a-1 and zone 112a-2, zones need not be contiguous; and, in other words, system 100a can treat inputs made in either zone 112a-1 or zone 112a-2 equivalently, or similarly, thereby providing the ability in some implementations to accommodate “handedness” of users, and See also at least paragraphs[0024]-[0026], [0027]-[0029], [0032]-[0035], [0081], [0089], [0091], and [0095] of Dabir (i.e., Dabir teaches program code for defining zones in a space, which can include the three-dimensional region, based on zonal boundaries, wherein the system can detect inputs of a user within a zone that serves as a painting area in order to alter various characteristics of content, such as color, line width, brush stroke, and darkness, that appear on the screen of the system));

but Dabir does not expressly teach a boundary displayer to provide at least one visually recognizable boundary of each of the operation spaces, the boundary displayer including one of a point, a line, or a plane; the operations being associated with the respective operation spaces, and in at least any one of the plurality of operation spaces, a predetermined pointer movement operation to display information of the display device in conjunction with movement of the detection target in the operation space is defined.

However, Kaede teaches a boundary displayer to provide at least one visually recognizable boundary of each of the operation spaces, the boundary displayer including one of a point, a line, or a plane; the operations being associated with the respective operation spaces, and in at least any one of the plurality of operation spaces
(10; #1 and #2, #2a, and #2b FIGS. 1, 3A-4, 7-8, 11A-11B, 12-13, 18, and 23-26, paragraph[0048] of Kaede teaches that the floating image forming device 10 is a device that directly presents the floating image in midair; various methods of directly forming the floating image #1 in midair have already been proposed, and some have been put into practical use; for example, such methods of forming the floating image include a method of using a half-mirror, a method of using a beam splitter, a method of using a micromirror array, a method of using a microlens array, a method of using a parallax barrier, and a method of using plasma emission; and the method by which the floating image forming device 10 presents the floating image in midair may be any of the methods listed here or some other method that becomes usable in the future, and See also at least paragraphs[0046]-[0047], [0054]-[0055], [0065]-[0081], [0120]-[0128], [0156]-[0163], [0185]-[0191], [0201], [0204]-[0205], [0208], and [0221]-[0222] of Kaede (e.g., Kaede teaches a floating image forming device that directly presents at least one floating image, which has an outer border viewable to a user, in midair, wherein at least one detection region that includes the floating image(s) is capable of having buttons, divided sub-regions, and folder icons, each with boundaries and each having different operational functions responsive to a touch, wherein the sub-regions correspond to surfaces of the floating image(s))), and a predetermined pointer operation to display information of the display device in conjunction with movement of the detection target in the operation space is defined (FIGS. 1, 3A-4, 7-8, 11A-11B, 12-13, 18, and 23-26, paragraph[0208] of Kaede teaches that, in the example of FIG. 26, the sales file is surrounded by a thick line to indicate that the sales file has been selected; and the selection of a file may be performed by a gesture in midair (such as by moving the fingertip to pass through the region where the sales file is being displayed, for example), or by performing an operation with a device not illustrated, such as an operation performed on a touch panel or an operation performed using a mouse or a trackpad, and See also at least paragraphs[0054]-[0055], [0065]-[0081], [0120]-[0128], [0156]-[0163], [0185]-[0191], [0201], [0204]-[0207], [0209]-[0212], and [0221]-[0222] of Kaede (i.e., Kaede teaches a sales file icon being surrounded by a thick line, responsive to user selection via a midair gesture or a touch panel, to indicate the sales file has been selected));

but the combination of Dabir and Kaede still does not expressly teach the recited movement. However, Hara teaches movement (FIGS. 6A-6B, paragraphs[0121]-[0122] of Hara teach that, in FIG. 6A, the pointer image 4a has a circular shape; the processor 60 changes the size of the circular shape of the pointer image 4a in such a manner as to be reduced to converge on the position of a finger 51
(substantially at a center of the spatial projected image 4a in FIG. 6A) as the finger 51 enters deeper into the detection target region 41; other images of icons, such as a polygonal shape, an arrow, and the like, can be used as the pointer image 4a1; additionally, the shape of the pointer image 4a1 is not limited to a closed figure such as the circular pointer image 4a1, and hence a partially opened figure (for example, a broken line or a chain line) may be used for the pointer image 4a1; and the operator can determine that the input operation is recognized by the spatial projection apparatus 100 by visually confirming that the pointer image 4a1 has reduced its size, and See also at least paragraphs[0043] and [0045]-[0046] of Hara (i.e., Hara teaches a pointer image that converges on a position of a finger)).
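For illustration, a minimal sketch of the pointer behavior Hara describes; the depth normalization and radius bounds are assumptions for illustration, not Hara's implementation:

```python
# Illustrative sketch only, not Hara's code: the pointer image is drawn
# centered on the detected finger position, and its radius shrinks as the
# finger enters deeper into the detection target region (depth normalized
# so 0.0 is the entry plane and 1.0 is full depth).

def pointer_radius(depth: float, r_max: float = 40.0, r_min: float = 4.0) -> float:
    """Radius (px) of the circular pointer for a normalized finger depth."""
    t = max(0.0, min(1.0, depth))          # clamp depth to [0, 1]
    return r_max + t * (r_min - r_max)     # r_max at entry, r_min at full depth

print(pointer_radius(0.0), pointer_radius(1.0))   # 40.0 4.0
```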
Furthermore, Dabir, Kaede, and Hara are considered to be analogous art because they are from the same field of endeavor with respect to a detection device, and involve the same problem of forming the detection device for suitably detecting a position of a finger. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Dabir based on Kaede and Hara to have a boundary displayer to provide at least one visually recognizable boundary of each of the operation spaces, the boundary displayer including one of a point, a line, or a plane; the operations being associated with the respective operation spaces, and in at least any one of the plurality of operation spaces, a predetermined pointer movement operation to display information of the display device in conjunction with movement of the detection target in the operation space is defined. One reason for the modification as taught by Kaede is to suitably present an image floating in midair (paragraph[0004] of Kaede). Another reason for the modification as taught by Hara is so the operator can suitably determine that an input operation is recognized by a spatial projection apparatus, by visually confirming that a pointer has reduced its size (paragraph[0044] of Hara). The same motivation and rationale to combine for claim 43 mentioned above, in light of the corresponding statement of grounds of rejection, applies to each respective dependent claim mentioned in the corresponding statement of grounds of rejection.

Regarding claim 46, Dabir, Kaede, and Hara teach the interface device according to claim 43, wherein the boundary displayer is a projector projecting the aerial image in the virtual space, and the projector can project the aerial image as indicating only the boundary position of the operation space (FIGS. 1, 3A-4, 7-8, and 18, paragraph[0048] of Kaede teaches that the floating image forming device 10 is a device that directly presents the floating image in midair; various methods of directly forming the floating image #1 in midair have already been proposed, and some have been put into practical use; for example, such methods of forming the floating image include a method of using a half-mirror, a method of using a beam splitter, a method of using a micromirror array, a method of using a microlens array, a method of using a parallax barrier, and a method of using plasma emission; and the method by which the floating image forming device 10 presents the floating image in midair may be any of the methods listed here or some other method that becomes usable in the future, and See also at least paragraphs[0046]-[0047], [0054]-[0055], [0065]-[0081], [0156]-[0163], [0185]-[0191], and [0201] of Kaede (e.g., Kaede teaches a floating image forming device that directly presents at least one floating image, which, as shown in at least FIG. 1, has an outer border viewable to the user, in midair)).

Claim 47 is rejected under 35 U.S.C. 103 as being unpatentable over Kaede, in view of Hara, and Dabir.

Regarding claim 47, Kaede and Hara teach the interface device according to claim 1, but do not expressly teach wherein neighboring operation spaces in each of the operation spaces correspond to the operations having continuity to each other, the neighboring operation spaces can be recognized simultaneously for users. However, Dabir teaches wherein neighboring operation spaces in each of the operation spaces correspond to the operations having continuity to each other, the neighboring operation spaces can be recognized simultaneously for users (FIGS. 1A-2, paragraphs[0030]-[0031] of Dabir teach that, in an implementation, substantially contemporaneous inputs of objects and/or motion in two or more zones can indicate to system 100a that the inputs should be interpreted together; for example, system 100a can detect input(s) of content made by a virtual brush in zone 116a contemporaneous with inputs of commands in zone 112a-1 and/or zone 112a-2; accordingly, the user can employ this mechanism to alter the characteristics (e.g., color, line width, brush stroke, darkness, etc.) of the content input as the content input is being made; while illustrated with examples using adjacent zones for ease of illustration, there is no special need for zones to touch one another; thus, in implementations, zones can be contiguous, dis-contiguous, or combinations thereof; in some implementations, inter-zone spaces can be advantageously interposed between zones to facilitate application-specific purposes; further, as illustrated by zone 112a-1 and zone 112a-2, zones need not be contiguous; and, in other words, system 100a can treat inputs made in either zone 112a-1 or zone 112a-2 equivalently, or similarly, thereby providing the ability in some implementations to accommodate “handedness” of users, and See also at least paragraphs[0024]-[0026], [0027]-[0029], [0032]-[0035], [0081], [0089], [0091], and [0095] of Dabir (i.e., Dabir teaches program code for defining zones in a space, which can include the three-dimensional region, based on zonal boundaries, wherein the system can detect inputs of a user within a zone that serves as a painting area in order to alter various characteristics of content, such as color, line width, brush stroke, and darkness, that appear on the screen of the system)).
Furthermore, Kaede, Hara, and Dabir are considered to be analogous art because they are from the same field of endeavor with respect to a detection device, and involve the same problem of forming the detection device for suitably detecting a position of a finger. Therefore, before the effective filing date of the claimed invention it would have been obvious to one of ordinary skill in the art to modify the system of Kaede based on Hara and Dabir wherein neighboring operation spaces in each of the operation spaces correspond to the operations having continuity to each other, and the neighboring operation spaces can be recognized simultaneously for users. One reason for the modification as taught by Hara is so the operator can suitably determine that an input operation is recognized by a spatial projection apparatus, by visually confirming that a pointer has reduced its size (paragraph[0044] of Hara). Another reason for the modification as taught by Dabir is to have a system that detects a portion of a hand and/or other detectable object in a region of space monitored by a 3D sensor (ABSTRACT of Dabir).

Potentially Allowable Subject Matter

Claims 50 and 51 would be allowable if rewritten to overcome the applicable objection(s) indicated above, because for each of the claims the prior art references of record do not teach the combination of all element limitations as presently claimed. For example, in regard to claim 50, the prior art of record at least does not expressly teach the concept of the operation information outputter to output the operation information for performing the predetermined operation on an application displayed on a display apparatus, using at least a result of determination performed by the determiner, wherein each of the operation spaces corresponds to at least one operation of a plurality of kinds of operations using one of a mouse or a touch panel for the application, and different successive operations of the operations for the application are associated with adjacent operation spaces among the respective operation spaces. As an additional example, in regard to claim 51, the prior art of record at least does not expressly teach the concept of the operation information outputter to output the operation information for performing the predetermined operation on an application displayed on a display apparatus, using at least a result of determination performed by the determiner, wherein the operation information outputter identifies movement of the detection target on a basis of the three-dimensional position of the detection target, and associates movement of the detection target in each of the operation spaces or across each of the operation spaces with at least one operation of a plurality of kinds of operations for the application using one of a mouse or a touch panel, and interlock a predetermined operation for the application with the movement of the detection target.

In addition, claims 27, 32-35, 38, 40-42, 44, and 49 would be allowable if rewritten to overcome the applicable rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, and objection(s) indicated above, and if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because for each of claims 27, 32-35, 38, 40-42, 44, and 49 the prior art references of record do not teach the combination of all element limitations as presently claimed.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and includes the following: Nishioka, U.S. Patent Application Publication 2015/0077399 A1 (hereinafter Nishioka), teaches a spatial coordinate identification device and a technique for detecting a virtual touch operation performed by a user relative to an aerial image formed in space. Rakshit et al., U.S. Patent 11,188,154 B2 (hereinafter Rakshit), teaches projecting holographic images based on a current context.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL-SAMAD A ADEDIRAN, whose telephone number is (571) 272-3128. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDUL-SAMAD A ADEDIRAN/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Feb 28, 2025
Application Filed
Jan 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604613
DISPLAY DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12592188
PIXEL CIRCUITS AND DISPLAY PANELS
2y 5m to grant Granted Mar 31, 2026
Patent 12586527
PIXEL DRIVING CIRCUIT, DISPLAY DEVICE INCLUDING THE SAME, AND METHOD FOR DRIVING THE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586496
DISPLAY DEVICE AND METHOD OF DRIVING A DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12572202
Determining IPD By Adjusting The Positions Of Displayed Stimuli
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
92%
With Interview (+13.9%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 617 resolved cases by this examiner. Grant probability derived from career allow rate.
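These figures appear to follow by simple arithmetic from the career data shown above; a sketch under that assumption (treating the interview lift as additive percentage points is an inference, not the tool's documented methodology):

```python
# Sketch of how the displayed figures appear to be derived; an assumption,
# not the tool's documented methodology.
granted, resolved = 481, 617
allow_rate = granted / resolved              # 0.7796..., shown as 78%
interview_lift = 0.139                       # +13.9 percentage points
with_interview = allow_rate + interview_lift # 0.9186..., shown as 92%
print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```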
