DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
In paragraph [0001], the status of Application No. 18/312,107 needs to be updated to include the U.S. Patent number, in the same manner as the remainder of the paragraph references U.S. Application numbers and corresponding U.S. Patent numbers.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 1-35 have been analyzed under 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) and are interpreted as not invoking 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) claim interpretation.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7, 9-23, and 25-35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-13, 16, 22-27, and 29 of U.S. Patent No. 11,727,625 in view of Stafford et al., US Patent Application Publication No. 2016/0260251, hereinafter Stafford.
Claim 1 is a broadened combination of the Patent’s dependent claims 6 and 16 with the Patent’s independent claim 1. In this analysis, dependent virtual assistant claim 6 is the base claim, modified by dependent display claim 16.
Stafford describes and provides motivation to claim “output the virtual content for display at the placement position”; refer to FIGs. 10 and 11, the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
It would have been obvious to modify the Patent’s claim 6 to claim “output the virtual content for display at the placement position,” which is present in the Patent’s claim 16, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claims 2-4 add to the limitations of independent claim 1 the limitations “extended reality headset”, “extended reality headset is a virtual reality headset”, and “extended reality headset is an augmented reality headset”, respectively, which are taught by Stafford.
It would have been obvious to modify the Patent’s claim 6 to claim “extended reality headset”, “extended reality headset is a virtual reality headset”, and “extended reality headset is an augmented reality headset”, each of which is taught by Stafford, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claims 5-7 and 9-14 correspond respectively to the Patent’s dependent claims 3-5 and 7-12.
Dependent claim 15 corresponds to the Patent’s dependent claim 13.
Dependent claim 16 corresponds to the Patent’s limitation present in independent claim 1.
Dependent claim 17 corresponds to the Patent’s dependent claim 9.
Dependent claim 18 corresponds to the Patent’s dependent claim 10.
Dependent claims 19-21 correspond to the Patent’s dependent claim 13.
Claim 22 is a broadened combination of the Patent’s dependent method claim 23 and apparatus claim 16 with the Patent’s independent claim 18. Claim 22 is present in the Patent’s dependent claim 23; however, the Patent’s method claims lack “displaying the virtual content at the placement position,” which is present in the Patent’s apparatus claim 16.
Stafford describes and provides motivation to claim “output the virtual content for display at the placement position”; refer to FIGs. 10 and 11, the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
It would have been obvious to modify the Patent’s claim 23 to claim “output the virtual content for display at the placement position,” which is present in the Patent’s apparatus claim 16, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claims 23 and 25-29 correspond respectively to the Patent’s dependent claims 22, 24-27, and 29.
Dependent claim 30 corresponds to the Patent’s dependent claim 26.
Dependent claim 31 corresponds to the Patent’s limitation present in independent claim 18.
Dependent claim 32 corresponds to the Patent’s dependent claim 26.
Dependent claim 33 corresponds to the Patent’s dependent claim 27.
Dependent claims 34 and 35 correspond to the Patent’s dependent claim 26.
Claims 8 and 24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6 and 23 of U.S. Patent No. 11,727,625 in view of Stafford et al., US Patent Application Publication No. 2016/0260251, hereinafter Stafford, as set forth above for each of independent claims 1 and 22 further in view of Chou et al., US Patent Application Publication No. 2012/0281059.
U.S. Patent No. 11,727,625 fails to claim the limitations present in each of dependent claims 8 and 24 “wherein the virtual assistant includes animated digital content and synthesized audio content”.
Chou describes and provides motivation to claim “wherein the virtual assistant includes animated digital content and synthesized audio content.” Regarding animated content, refer to the abstract and paragraphs [0008], [0036], [0039], and [0041]; regarding audio content, refer to the abstract, paragraphs [0007], [0029], [0035], [0036], and [0039], and claims 10, 11, 17, and 19.
It would have been obvious to modify the Patent’s claims 6 and 23 to claim the limitation present in each of dependent claims 8 and 24 for the benefits expressed by Chou; refer to the abstract and paragraphs [0002] and [0036].
The following table summarizes the correlation of this application’s pending claims to the Patented claims of U.S. Patent No. 11,727,625.
This app’s claim | U.S. Patent 11,727,625 claim(s)
1  | 1+6+16
2  | 1+6+16
3  | 1+6+16
4  | 1+6+16
5  | 3
6  | 4
7  | 5
8  | NONE
9  | 7
10 | 8
11 | 9
12 | 10
13 | 11
14 | 12
15 | 13
16 | 1
17 | 9
18 | 10
19 | 13
20 | 13
21 | 13
22 | 18+23+16
23 | 22
24 | NONE
25 | 24
26 | 25
27 | 26
28 | 27
29 | 29
30 | 26
31 | 18
32 | 26
33 | 27
34 | 26
35 | 26
Claims 1-7, 9, 11-23, 25, and 27-35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-6, 9-13, 16, 18, 22, 23, 25, 26, 28, and 33 of U.S. Patent No. 11,200,729 in view of Stafford et al., US Patent Application Publication No. 2016/0260251, hereinafter Stafford.
Claim 1 is a combination of the Patent’s dependent claims 9, 11, and 13 with the Patent’s independent claim 1. In this analysis, dependent virtual assistant claim 11 is the base claim, modified by dependent pose claim 9 and dependent display claim 13.
Stafford describes and provides motivation to claim “a pose of the apparatus; and” and “output the virtual content for display at the placement position”; refer to FIGs. 10 and 11, the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
It would have been obvious to modify the Patent’s claim 11 to claim “a pose of the apparatus; and,” which is present in the Patent’s claim 9, and “output the virtual content for display at the placement position,” which is present in the Patent’s claim 13, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claims 2-4 add to the limitations of independent claim 1 the limitations “extended reality headset”, “extended reality headset is a virtual reality headset”, and “extended reality headset is an augmented reality headset”, respectively, which are taught by Stafford.
It would have been obvious to modify the Patent’s claim 11 to claim “extended reality headset”, “extended reality headset is a virtual reality headset”, and “extended reality headset is an augmented reality headset”, each of which is taught by Stafford, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claims 5-6 correspond respectively to the Patent’s dependent claims 3-4.
Dependent claim 7 corresponds to the Patent’s dependent claim 11.
Dependent claim 9 corresponds to the Patent’s dependent claim 16.
Dependent claims 11-14 correspond respectively to the Patent’s dependent claims 5-8.
Dependent claim 15 corresponds to the Patent’s dependent claim 13.
Dependent claim 16 corresponds to a limitation present in the Patent’s independent claim 1.
Dependent claims 17-18 correspond respectively to the Patent’s dependent claims 5-6.
Dependent claims 19-21 correspond to the Patent’s dependent claim 13.
Claim 22 is a broadened combination of the Patent’s dependent method claims 26 and 28 and apparatus claim 13 with the Patent’s independent claim 18. Claim 22 is present in the Patent’s dependent claims 26 and 28; however, the Patent’s method claims lack “displaying the virtual content at the placement position,” which is present in the Patent’s apparatus claim 13 but not the method claims.
Stafford describes and provides motivation to claim “a pose of the device; and” and “displaying the virtual content at the placement position”; refer to FIGs. 10 and 11, the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
It would have been obvious to modify the Patent’s claim 28 to claim “a pose of the device; and” and “displaying the virtual content at the placement position,” which is present in the Patent’s claim 13, for the benefits expressed by Stafford; refer to the Abstract, paragraphs [0078]-[0086], and claims 1, 4, 13, and 23.
Dependent claim 23 corresponds to the Patent’s dependent claim 28.
Dependent claim 25 corresponds to the Patent’s dependent claim 33.
Dependent claims 27-28 correspond respectively to the Patent’s dependent claims 22-23.
Dependent claim 29 corresponds to the Patent’s dependent claim 25.
Dependent claim 30 corresponds to the Patent’s dependent claim 22.
Dependent claim 31 corresponds to a limitation present in the Patent’s independent claim 18.
Dependent claims 32-33 correspond respectively to the Patent’s dependent claims 22-23.
Dependent claims 34-35 correspond to the Patent’s dependent claim 22.
Claims 8 and 24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11 and 28 of U.S. Patent No. 11,200,729 in view of Stafford et al., US Patent Application Publication No. 2016/0260251, hereinafter Stafford, as set forth above for each of independent claims 1 and 22 further in view of Chou et al., US Patent Application Publication No. 2012/0281059.
U.S. Patent No. 11,200,729 fails to claim the limitations present in each of dependent claims 8 and 24 “wherein the virtual assistant includes animated digital content and synthesized audio content”.
Chou describes and provides motivation to claim “wherein the virtual assistant includes animated digital content and synthesized audio content.” Regarding animated content, refer to the abstract and paragraphs [0008], [0036], [0039], and [0041]; regarding audio content, refer to the abstract, paragraphs [0007], [0029], [0035], [0036], and [0039], and claims 10, 11, 17, and 19.
It would have been obvious to modify the Patent’s claims 11 and 28 to claim the limitation present in each of dependent claims 8 and 24 for the benefits expressed by Chou; refer to the abstract and paragraphs [0002] and [0036].
The following table summarizes the correlation of this application’s pending claims to the Patented claims of U.S. Patent No. 11,200,729.
This app’s claim | U.S. Patent 11,200,729 claim(s)
1  | 1+8+9+11+13
2  | 1+8+9+11+13
3  | 1+8+9+11+13
4  | 1+8+9+11+13
5  | 3
6  | 4
7  | 11
8  | NONE
9  | 16
10 | NONE
11 | 5
12 | 6
13 | 7
14 | 8
15 | 13
16 | 1
17 | 5
18 | 6
19 | 13
20 | 13
21 | 13
22 | 18+26+28+13
23 | 28
24 | NONE
25 | 33
26 | NONE
27 | 22
28 | 23
29 | 25
30 | 22
31 | 18
32 | 22
33 | 23
34 | 22
35 | 22
The following table correlates this application’s pending claims filed on 06/21/2024 with the Patented claims of U.S. Patent No. 11,200,729 and with the Patented claims of U.S. Patent No. 11,727,625.
Claims filed on 06/21/2024
1. An apparatus comprising:
at least one memory; and
at least one processor coupled to the at least one memory, the at least one processor configured to:
obtain at least one image of a real-world scene;
generate map data based on the at least one image;
perform an analysis on the at least one image to identify an object depicted in the real-world scene;
determine a placement position for virtual content associated with a virtual assistant
in an augmented reality environment, the placement position being based on the map data and
a pose of the apparatus; and
output the virtual content for display at the placement position.
The virtual assistant is recited in pending claim 1 in the claimed: “determine a placement position for virtual content associated with a virtual assistant”.
2. The apparatus of claim 1, wherein the apparatus is an extended reality headset.
3. The apparatus of claim 2, wherein the extended reality headset is a virtual reality headset.
4. The apparatus of claim 2, wherein the extended reality headset is an augmented reality headset.
5. The apparatus of claim 1, wherein the at least one processor is configured to output the placement position for transmission to a device.
6. The apparatus of claim 5, wherein the at least one processor is configured to output the placement position for transmission to the device via a server.
7. The apparatus of claim 1, wherein the virtual content is associated with the object.
In pending claim 1
8. The apparatus of claim 1, wherein the virtual assistant includes animated digital content and synthesized audio content.
9. The apparatus of claim 1, wherein the analysis of the at least one image to identify the object comprises a semantic analysis.
10. The apparatus of claim 1, wherein the pose of the apparatus is based on one or more inertial sensor measurements obtained by the apparatus.
11. The apparatus of claim 1, wherein the at least one processor is configured to determine the placement position for the virtual content based on a user input.
12. The apparatus of claim 11, wherein the user input is a hand gesture.
13. The apparatus of claim 12, wherein the hand gesture is received via a touch-sensitive surface.
14. The apparatus of claim 12, wherein the user input is associated with a position in the augmented reality environment.
15. The apparatus of claim 1, wherein the at least one processor is configured to:
determine an updated placement position for the virtual content based on a user input; and
output the virtual content for display at the updated placement position.
16. The apparatus of claim 1, wherein the at least one processor is configured to determine the placement position for the virtual content associated with the virtual assistant in the augmented reality environment further based on the identified object.
17. The apparatus of claim 1, wherein the at least one processor is configured to receive a user input associated with the virtual assistant.
18. The apparatus of claim 17, wherein the user input comprises at least one of a gesture or speech.
19. The apparatus of claim 17, wherein the at least one processor is configured to output feedback to a user in response to the user input.
20. The apparatus of claim 19, wherein the feedback includes display content output via a display.
21. The apparatus of claim 20, further comprising the display.
22. A method performed at a device, the method comprising:
obtaining at least one image of a real-world scene;
generating map data based on the at least one image;
performing an analysis on the at least one image to identify an object depicted in the real-world scene;
determining a placement position for virtual content associated with a virtual assistant in an augmented reality environment, the placement position being based on the map data and
a pose of the device; and
displaying the virtual content at the placement position.
The virtual assistant is recited in pending claim 22 in the claimed: “determining a placement position for virtual content associated with a virtual assistant”.
23. The method of claim 22, wherein the virtual content is associated with the object.
In pending claim 22.
24. The method of claim 22, wherein the virtual assistant includes animated digital content and synthesized audio content.
25. The method of claim 22, wherein the analysis of the at least one image to identify the object comprises a semantic analysis.
26. The method of claim 22, wherein the pose of the device is based on one or more inertial sensor measurements.
27. The method of claim 22, further comprising
determining the placement position for the virtual content based on a user input.
28. The method of claim 27, wherein the user input is a hand gesture.
29. The method of claim 28, wherein
the user input is associated with a position in the augmented reality environment.
30. The method of claim 22, further comprising:
determining an updated placement position for the virtual content based on a user input; and
displaying the virtual content at the updated placement position.
31. The method of claim 22, further comprising
determining the placement position for the virtual content associated with the virtual assistant in the augmented reality environment further based on the identified object.
32. The method of claim 22, further comprising
receiving a user input associated with the virtual assistant.
33. The method of claim 32, wherein the user input comprises at least one of a gesture or speech.
34. The method of claim 32, further comprising
outputting feedback to a user in response to the user input.
35. The method of claim 34, wherein
the feedback includes display content output via a display.
U.S. Patent #11,727,625
Claims
1. An apparatus comprising:
a computer readable storage medium; and
a processor coupled to the computer readable storage medium, the processor configured to:
obtain, from a first device, first map data, wherein the first map data is based on at least one image of a real-world scene imaged by a camera of the first device;
obtain, from a second device, second map data;
produce correlated map data based on the first map data and the second map data;
perform an analysis on the at least one image to identify an object depicted in the real-world scene;
determine a placement position for an item of virtual content
The virtual assistant is claimed in patented claim 6.
in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and
a pose of the first device;
and
transmit the determined placement position to one or more of the first device and the second device.
6. The apparatus of claim 5, wherein the virtual content comprises a virtual assistant.
3. The apparatus of claim 1, wherein the apparatus is the first device, and wherein the placement position is transmitted to the second device.
4. The apparatus of claim 3, wherein the placement position is transmitted to the second device via a server.
5. The apparatus of claim 3, wherein the virtual content is associated with the object.
6. The apparatus of claim 5, wherein the virtual content comprises a virtual assistant.
7. The apparatus of claim 5, wherein the analysis of the at least one image to identify the object comprises a semantic analysis.
8. The apparatus of claim 5, wherein the pose of the first device is based on one or more inertial sensor measurements received by the first device.
9. The apparatus of claim 5, wherein the processor is configured to invoke a placement position of the item of virtual content based on a received user input.
10. The apparatus of claim 9, wherein the user input is a hand gesture.
11. The apparatus of claim 10, wherein the hand gesture is received via a touch-sensitive surface.
12. The apparatus of claim 11, wherein the user input is associated with a position in the augmented reality environment.
For 15: claim 13/9/5/3/1
Claimed in patent claim 1:
“in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and”.
Claimed in claims 6+9:
6. The apparatus of claim 5, wherein the virtual content comprises a virtual assistant.
9. The apparatus of claim 5, wherein the processor is configured to invoke a placement position of the item of virtual content based on a received user input.
10. The apparatus of claim 9, wherein the user input is a hand gesture.
For 19: claim 13/9/5/3/1
For 20: claim 13/9/5/3/1
For 21: claim 13/9/5/3/1
18. A computer-implemented method comprising:
obtaining, from a first device, first map data, wherein the first map data is based on at least one image of a real-world scene imaged by a camera of the first device;
obtaining, from a second device, second map data;
producing correlated map data based on the first map data and the second map data;
performing an analysis on the at least one image to identify an object depicted in the real-world scene;
determining a placement position for an item of virtual content in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and
The virtual assistant is claimed in patented claim 23.
a pose of the first device; and
transmitting the determined placement position to one or more of the first device and the second device.
23. The method of claim 22, wherein the virtual content comprises a virtual assistant.
22. The method of claim 20, wherein the virtual content is associated with the object.
23. The method of claim 22, wherein the virtual content comprises a virtual assistant.
24. The method of claim 22, wherein the analysis of the at least one image to identify the real-world object comprises a semantic analysis.
25. The method of claim 22, wherein the pose of the first device is based on one or more inertial sensor measurements received by the first device.
26. The method of claim 22, further comprising invoking a placement position of the item of virtual content based on a received user input.
27. The method of claim 26, wherein the user input is a hand gesture.
28. The method of claim 27, wherein the hand gesture is received via a touch-sensitive surface.
29. The method of claim 28, wherein the user input is associated with a position in the augmented reality environment.
For 30: claim 26/22/20
Claimed in patent claim 18:
“determining a placement position for an item of virtual content in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and ”.
23. The method of claim 22, wherein the virtual content comprises a virtual assistant.
26. The method of claim 22, further comprising invoking a placement position of the item of virtual content based on a received user input.
27. The method of claim 26, wherein the user input is a hand gesture.
For 34: claim 26/22/20
For 35: claim 26/22/20
U.S. Patent #11,200,729
Claims
1. An apparatus comprising:
a computer readable storage medium; and
a processor coupled to the computer readable storage medium, the processor configured to:
obtain, from a first device, first map data, wherein the first map data is based on at least one image of a real-world scene imaged by a camera of the first device;
obtain, from a second device, second map data;
produce correlated map data based on the first map data and the second map data;
perform an analysis on the at least one image to identify an object depicted in the real-world scene;
receive a user input;
determine a placement position for an item of virtual content
The virtual assistant is claimed in patented claim 11.
in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and
Claimed in patented claim 9.
the received user input; and
transmit the determined placement position to one or more of the first device and the second device.
9. The apparatus of claim 8, wherein the placement position is further based on a pose of the first device.
11. The apparatus of claim 10, wherein the virtual content comprises a virtual assistant.
3. The apparatus of claim 1, wherein the apparatus is the first device, and wherein the placement position is transmitted to the second device.
4. The apparatus of claim 3, wherein the placement position is transmitted to the second device via a server.
10. The apparatus of claim 8, wherein the virtual content is associated with a real world-object in the augmented reality environment.
11. The apparatus of claim 10, wherein the virtual content comprises a virtual assistant.
16. The apparatus of claim 1, wherein the analysis of the at least one image to identify an object depicted in the real-world scene comprises a semantic analysis.
5. The apparatus of claim 3, wherein the processor is further configured to invoke a placement of the item of virtual content based on the received user input.
6. The apparatus of claim 5, wherein the user input is a hand gesture.
7. The apparatus of claim 6, wherein the hand gesture is received via a touch-sensitive surface.
8. The apparatus of claim 7, wherein the user input is associated with a position in the augmented reality environment.
For 15: claim 13/8/7/6/5/3/1
Claimed in patent claim 1: “in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and”.
Claimed in claim 1 + 5:
Claimed in claim 1:
in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and the received user input; and
5. The apparatus of claim 3, wherein the processor is further configured to invoke a placement of the item of virtual content based on the received user input.
6. The apparatus of claim 5, wherein the user input is a hand gesture.
For 19: claim 13/8/7/6/5/3/1
For 20: claim 13/8/7/6/5/3/1
For 21: claim 13/8/7/6/5/3/1
18. A computer-implemented method comprising:
obtaining, from a first device, first map data, wherein the first map data is based on at least one image of a real-world scene imaged by a camera of the first device;
obtaining, from a second device, second map data;
producing correlated map data based on the first map data and the second map data;
performing an analysis on the at least one image to identify an object depicted in the real-world scene;
receiving a user input;
determining a placement position for an item of virtual content in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and
The virtual assistant is claimed in patented claim 28.
Claimed in patented claim 26.
the received user input; and
transmitting the determined placement position to one or more of the first device and the second device.
26. The method of claim 25, wherein the placement position is further based on a pose of the first device.
28. The method of claim 27, wherein the virtual content comprises a virtual assistant.
27. The method of claim 25, wherein the virtual content is associated with a real world-object in the augmented reality environment.
28. The method of claim 27, wherein the virtual content comprises a virtual assistant.
33. The method of claim 18, wherein the analysis of the at least one image to identify an object depicted in the real-world scene comprises a semantic analysis.
22. The method of claim 20, further comprising invoking a placement of the item of virtual content based on the received user input.
23. The method of claim 22, wherein the user input is a hand gesture.
24. The method of claim 23, wherein the hand gesture is received via a touch-sensitive surface.
25. The method of claim 24, wherein the user input is associated with a position in the augmented reality environment.
For 30: claim 22/20/18
Claimed in patent claim 18: “determining a placement position for an item of virtual content in an augmented reality environment, the placement position being based on the correlated map data, the identified object, and”.
28. The method of claim 27, wherein the virtual content comprises a virtual assistant.
22. The method of claim 20, further comprising invoking a placement of the item of virtual content based on the received user input.
23. The method of claim 22, wherein the user input is a hand gesture.
For claim 34: see patented claims 22/20/18.
For claim 35: see patented claims 22/20/18.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gorur Sheshagiri et al., US Patent No. 12,056,808, is the patent that issued from this application’s parent application 18/312,107, referenced in this application’s paragraph [0001].
Lee et al., US Patent Application No. 2018/0211399, describes system requirements in the fields of Virtual Reality (VR) and Augmented Reality (AR); refer to paragraph [0051].
[0051] In the fields of Virtual Reality (VR) and Augmented Reality (AR) (which seeks to transformatively combine the real-world with computer-generated imagery or visual indicia), such a system must be able to determine location and pose information of the wearer, identify surroundings, generate a reliable 3D map including depth information (often-times from a single monocular camera devoid of native depth information), and identify different objects, surfaces, shapes, and regions of the surroundings. Only then can a VR/AR system generate reconstructed imagery and project it into the field of view of a user in precise geometric and temporal alignment with the actual world surrounding the user to maintain an illusion of uniformity.
Seichter et al., US Patent Application No. 2016/0140763, describes detecting user hand movement and accordingly determining placement of virtual content; refer to FIG. 1 and to paragraphs [0005]-[0008] and [0029].
Hildreth, US Patent Application No. 2008/0273755, describes mapping changes in the position of a user’s finger to changes in the position of a corresponding virtual object; refer to paragraphs [0046], [0047], [0049], and [0072].
Allowable Subject Matter
Claims 1-35 would be allowable if a proper terminal disclaimer is filed.
The following is a statement of reasons for the indication of allowable subject matter:
The prior art of record fails to teach or suggest limitations very similar to those discussed in the reasons for allowance in parent US Patent Application No. 17/456,370 and in parent US Patent Application No. 17/031,315.
Claims 1-21:
The prior art of record fails to teach or suggest, in the context of independent claim 1, “determine a placement position for virtual content associated with a virtual assistant in an augmented reality environment, the placement position being based on the map data and a pose of the apparatus; and”.
Claims 22-35:
The prior art of record fails to teach or suggest in the context of independent claim 22 “determining a placement position for virtual content associated with a virtual assistant in an augmented reality environment, the placement position being based on the map data and a pose of the device; and”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFERY A BRIER whose telephone number is (571)272-7656. The examiner can normally be reached on Mon-Fri from 8:30am-3:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao M Wu, can be reached at telephone number 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
JEFFERY A. BRIER
Primary Examiner
Art Unit 2613
/JEFFERY A BRIER/Primary Examiner, Art Unit 2613