Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
This Office Action is in response to an application filed on 01/17/2025, which is a continuation (CON) of Application No. 18/395,960, filed 12/26/2023 (now U.S. Patent No. 12,232,558); which is a CON of Application No. 18/235,816, filed 08/19/2023 (now U.S. Patent No. 11,910,865); which is a CON of Application No. 17/028,956, filed 09/22/2020 (now U.S. Patent No. 11,730,226); which is a CON of Application No. 16/666,031, filed 10/28/2019 (now U.S. Patent No. 10,786,033); which claims priority to Provisional Application No. 62/752,089, filed 10/29/2018. Claims 1-6 are pending and are being examined.
Examiner’s Note
Claims 1-2 refer to "A headwear", claims 3-4 refer to "A headwear", and claims 5-6 refer to "A headwear". It is requested that the scope of all the independent claims be kept similar in order to advance prosecution.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim 5 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,730,226 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter claimed in the instant application is disclosed in, and covered by, the patent, since the patent and the application claim common subject matter. Below is a list of limitations that perform the same function, although different terminology may be used in the two sets of claims to describe them. Claim 1 of the patent is used as an example to analyze the common subject matter:
U.S. Patent No. 11,730,226 B2 (claim 1):
Instant Application No. 19/031,177 (claims 5-6):
1. A helmet configured to be worn by a first user, comprising: an augmented reality (“AR”) interface configured to visually present identifiers indicating an identity, position or other information about one or more users; one or more microphones; a position tracker that monitors a first position of the first user, wherein the position tracker is a tilt sensor; a wireless antenna that at least: receives a second position information of a second headwear apparatus worn by a second user; receives a third position information of a third headwear apparatus worn by a third user; sends first audio data generated by the microphone; and receives second audio data from the second headwear apparatus; a gaze tracking imaging element configured to monitor a gaze direction of the first user; one or more processors; and a memory storing program instructions that when executed by the one or more processors cause the one or more processors to at least: determine, based at least in part on the gaze direction of the first user and one or more of the first position generated by the position tracker, the second position information, or the third position information, that the first user wearing is looking in a direction of the second user; in response to a determination that the first user is looking in the direction of the second user: establish, with the wireless antenna, a connection with the second headwear apparatus worn by the second user; send the first audio data to the second headwear apparatus without sending the first audio data to the third headwear apparatus; receive, from the third headwear apparatus, the third audio data; in response to receipt of the second audio data, present, on the AR interface, an indication that the second audio data is received from the third headwear apparatus to indicate that the third user is speaking; and receive visual information comprising an AR motion analytic view.
5. A headwear configured to be worn by a first user, comprising: a frame; one or more lenses; an augmented reality ("AR") display; one or more sensors to track objects in the first user's environment, wherein the sensor is selected from the group consisting of cameras, LiDAR sensors, depth sensors, infrared sensors, and any combination thereof; an augmented reality ("AR") interface configured to visually present identifiers indicating an identity, position or other information about one or more objects; one or more processors; and a memory storing program instructions that when executed by the one or more processors cause the one or more processors to at least establishing an audio connection between the first headwear and a second headwear worn by the second user; sending, via the audio connection, a first audio data from the first headwear to the second headwear for output by the second headwear, receiving at the first headwear and via the audio connection, a second audio data sent from the second headwear; indicating on the AR interface of the first headwear that second audio data is received from the second headwear and is of the second user speaking, receiving, at the first headwear, visual information, outputting at the first headwear the second audio data, wherein outputting the second audio data such has a virtual sound in a direction of the second user with respect to the first user.
6. The headwear according to claim 5 wherein the lenses are detachably mounted on the frame.
Claim 5 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,730,226 B2.
Referring to claim 5, U.S. Patent No. 11,730,226 B2, hereinafter '226, does not explicitly disclose the following limitations claimed in instant application 19/031,177: LiDAR sensors; virtual sound.
However, Patil et al. (US 20180053413 A1), hereinafter Patil, teaches LiDAR sensors ([0062], light detection and ranging) and virtual sound ([0021]-[0025], reproduce the virtual sound).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify '226 to incorporate the teachings of Patil in order to make a user aware of objects based on virtual sound (Patil, Abstract).
Referring to claim 6, '226 in view of Patil does not explicitly disclose the limitation of instant application 19/031,177 wherein the lenses are detachably mounted on the frame.
However, Han (US 20160033772 A1), teaches wherein the lenses are detachably mounted on the frame ([0061], detachable lens).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify '226 in view of Patil to incorporate the teachings of Han in order to control the visual input of the device (Han, Abstract).
A nonstatutory double patenting rejection can be overcome by amending the conflicting claims so they are no longer coextensive in scope, or by filing a terminal disclaimer.
Allowable Subject Matter
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 1-4, Debates et al. (US 20180113317 A1), hereinafter Debates, discloses, considering claim 1 as an example, a headwear configured to be worn by a first user, comprising (Abstract): a frame; one or more lenses; one or more microphones; an augmented reality ("AR") display (Fig. 3); one or more sensors to track objects in the first user's environment, wherein the sensor is selected from the group consisting of cameras, LiDAR sensors, depth sensors, infrared sensors, and any combination thereof ([0024]); an augmented reality ("AR") interface configured to visually present identifiers indicating an identity, position or other information about one or more objects ([0002]); a gaze tracking imaging element configured to monitor a gaze direction of the first user; one or more processors; and a memory storing program instructions that when executed by the one or more processors cause the one or more processors to at least: determine an eye profile of the first user for gaze tracking, wherein the eye profile includes at least one of information regarding a position, size, or range of movement of the first user's eye with respect to the gaze tracking imaging element, a separation between each eye of the first user, or a pupil shape of each eye of the first user; determine the focal depth of the eye view of the user by tracking at least one of the direction, vergence or dilation, in order to track where and how far the user is looking; dynamically render the images displayed in real-time so that near field, mid field and far field images are properly focused or unfocused to simulate their correct depth with respect to the user; perform at least one action based on the gaze direction, wherein the action is at least one of aim, focus, or dynamically increasing or decreasing the resolution, rendering quality, compression rate, data size, and clipping region for portions of the image based on their viewability and focal relevance to the user; receive, at an augmented reality ("AR") interface of the headwear worn by the first user, a selection, wherein the selection can be one or more of a voice control, touch control, or based on determining a gaze direction of the first user; establish an audio connection between the first headwear and a second headwear worn by the second user; send, via the audio connection, a first audio data from the first headwear to the second headwear for output by the second headwear; receive, at the first headwear and via the audio connection, a second audio data sent from the second headwear; indicate on the AR interface of the first headwear that the second audio data is received from the second headwear and is of the second user speaking; receive, at the first headwear, visual information; and output at the first headwear the second audio data ([0016], virtual reality).
A further search was conducted, which failed to yield any additional prior art. Therefore, the prior art fails to teach or render obvious these limitations taken in combination with the others in the claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD J RAHMAN whose telephone number is (571)270-7190. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mohammad J Rahman/Primary Examiner, Art Unit 2487