Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed July 30, 2025 has been entered. Claims 1, 3, 11, 13 and 17 have been amended. Claims 1-20 remain pending and stand rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 11-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wan et al. (Pub. No.: US 2022/0084256 A1), hereinafter Wan, in view of Shroff et al. (Pub. No.: US 2020/0003897 A1), hereinafter Shroff, Davis et al. (Pub. No.: US 2019/0121522 A1), hereinafter Davis, Meyer et al. (Pub. No.: US 2023/0384894 A1), hereinafter Meyer, and Hariton (Pub. No.: US 2020/0110560 A1), hereinafter Hariton.
Regarding claim 1, Wan discloses an augmented reality controller for conveying content to an augmented reality device in communication with a vehicle (paragraph 24 teaches an automotive computer and a vehicle augmented reality platform), the augmented reality controller comprising:
at least one memory device storing computer program code (paragraph 54 teaches an independent memory having program instructions);
one or more processors communicatively coupled to the at least one memory device (Fig. 2 and paragraphs 33, 54 and 56 teach one or more processor(s) and a memory communicatively coupled to an independent memory having program instructions and a data collection and processing module) and configured, when executing the computer program code, to:
estimate a location of the augmented reality device (paragraphs 13 and 27 teach that a system may synchronize a vehicle’s coordinate system, so that the vehicle’s AR platform may track the position of an AR device). However, Wan fails to disclose with respect to one or more known areas of a grid.
Shroff discloses with respect to one or more known areas of a grid (Fig. 5 and paragraphs 15 and 48 teach that a plurality of uniformly sized grids are provided to the vehicle based on the vehicle’s current location and that the distance component can determine the distance based at least in part on the region falling within an interior region of a map grid). When applying this known technique to Wan, it would have been obvious to a person having ordinary skill in the art to estimate the location and generate content based on one or more known areas of a grid in order to reduce the amount of map data that needs to be transmitted to a vehicle at a given time.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan to incorporate the teachings of Shroff to provide an augmented reality controller with improved location estimation by taking into account its location with respect to one or more known areas of a grid.
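For illustration only, the grid-based localization technique attributed to Shroff above can be pictured as snapping a raw position estimate onto uniformly sized map cells and enumerating the cells within a predefined proximity of the vehicle. The following minimal Python sketch uses hypothetical names and Shroff’s example 25-meter tile size; it is an assumption-laden sketch, not the implementation of any cited reference.

    import math
    from dataclasses import dataclass

    TILE_SIDE_M = 25.0  # Shroff's example 25 m x 25 m map tile (paragraph 15)

    @dataclass(frozen=True)
    class GridCell:
        row: int
        col: int

    def cell_for_position(x_m: float, y_m: float, side_m: float = TILE_SIDE_M) -> GridCell:
        """Snap a raw location estimate to the known grid area containing it."""
        return GridCell(row=math.floor(y_m / side_m), col=math.floor(x_m / side_m))

    def cells_near(center: GridCell, radius: int = 1) -> list[GridCell]:
        """Known areas within a predefined proximity, e.g. a 3x3 block of tiles."""
        return [GridCell(center.row + dr, center.col + dc)
                for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)]

    # Example: a device estimated at (x=61.0 m, y=30.5 m) falls in cell (1, 2).
    print(cell_for_position(61.0, 30.5))    # GridCell(row=1, col=2)
    print(len(cells_near(GridCell(1, 2))))  # 9 cells in the 3x3 block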
In addition, Wan in view of Shroff disclose determine an orientation of a field-of-view of the augmented reality device (paragraph 55 of Wan teaches to orient them with individual devices). However, Wan in view of Shroff fail to disclose receive user selection of a selected content type from among a plurality of selectable content types, wherein the selected content type indicates a current preference of an individual.
Davis discloses receive user selection of a selected content type from among a plurality of selectable content types, wherein the selected content type indicates a current preference of an individual (Figs. 21A/B, Figs. 37A/B and paragraph 761 teach that the AGUI display system may also adapt graphic content based on the relational position of one or more viewing parties in a mapped environment; this can be based on the preferences of one or more identified viewing parties so that each viewer may view content specific to user preferences or specific to the device or object on which the graphic content or interface is assigned or displayed). Since Wan in view of Shroff teach the initial augmented reality controller with the ability to estimate locations with respect to one or more known areas of a grid and determine an orientation of a field-of-view of an augmented reality device, and Davis teaches receiving a user selection of a content type from among a plurality of content types based on user preferences, it would have been obvious to a person having ordinary skill in the art to combine the features so that, while viewing augmented reality content within a vehicle, the user would also have the ability to select from a plurality of different content types based on their preferences.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan in view of Shroff to incorporate the teachings of Davis, so that the combined features would present the user with more detailed preferences and selections to choose from when selecting content.
However, Wan in view of Shroff and Davis fail to disclose receive a user-selected type-of-travel context from among a plurality of type-of-travel contexts, the plurality of type-of-travel contexts comprising a first type-of-travel context associated with the individual walking and a second type-of-travel context associated with the individual engaging in vehicle travel.
Meyer discloses a type-of-travel context from among a plurality of type-of-travel contexts, the plurality of type-of-travel contexts comprising a first type-of-travel context associated with the individual walking and a second type-of-travel context associated with the individual engaging in vehicle travel (paragraph 72 teaches that one example factor that may determine what input-location correction model is selected is a usage context of the device; a usage context may refer to an environment or context in which a device may be used, and for which a particular input-location correction model may be associated; for example, a first usage context of a device may correspond to usage of the device in a vehicle, a second usage context may correspond to usage of the device while walking, and a third usage context may correspond to usage of the device on a train). Since Wan in view of Shroff and Davis teach an augmented reality controller with the capabilities of estimating locations, determining orientations and receiving a user’s selection of a content type from a plurality of content types and preferences, and Meyer teaches a device with the capabilities of determining a usage context from among a plurality of different contexts including a first usage context associated with a vehicle and a second usage context associated with walking, it would have been obvious to a person having ordinary skill in the art to combine the features so that any selected content would be associated with the usage context, including how the individual was traveling, whether driving in a vehicle or walking on foot.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan in view of Shroff and Davis to incorporate the teachings of Meyer, so that the combined features would provide the user with improved content selection and preferences by incorporating a usage context based on how the individual is traveling.
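For illustration only, Meyer’s per-context model selection (paragraph 72) amounts to a lookup from usage context to an associated model. The sketch below uses hypothetical names; the model identifiers are placeholders and are not drawn from Meyer’s actual implementation.

    from enum import Enum, auto

    class TravelContext(Enum):
        WALKING = auto()   # first type-of-travel context
        VEHICLE = auto()   # second type-of-travel context
        TRAIN = auto()     # Meyer's third example usage context

    # One input-location correction model per usage context (cf. Meyer para. 72).
    MODEL_BY_CONTEXT = {
        TravelContext.WALKING: "walking_correction_model",
        TravelContext.VEHICLE: "road_vehicle_correction_model",
        TravelContext.TRAIN: "train_correction_model",
    }

    def select_model(context: TravelContext) -> str:
        """Pick the model associated with the detected or selected usage context."""
        return MODEL_BY_CONTEXT[context]

    print(select_model(TravelContext.WALKING))  # walking_correction_model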
However, Wan in view of Shroff, Davis and Meyer fail to disclose that the type-of-travel context is received via a user selection.
Hariton discloses receive a user-selected type-of-travel context (Paragraph 43 teaches that, in some implementations, the user input may be received via a user device (e.g., via a user interface provided by user interface component 118), display device 140, and/or other device connected to system 100, and that user input received via user interface component 118 may comprise user input related to virtual content displayed in an augmented reality environment. Additionally, paragraph 34 teaches that the exemplary display 300 may comprise marker 306 and an image of virtual content 304, where exemplary display 300 may include virtual content 304 depicting an automobile... As display device 140 moves, image generation component 114 may be configured to automatically generate a new image based on the user’s current field of view.). Since Wan in view of Shroff, Davis and Meyer teach the initial functionality of an augmented reality controller that can utilize and distinguish between different type-of-travel contexts associated with moving, such as walking or traveling in a vehicle, and Hariton teaches a function for an augmented reality device to receive user selections that can modify the content being displayed within an augmented reality environment and can similarly change what image is displayed to the user as the user travels around, it would have been obvious to a person having ordinary skill in the art to combine the features so that any of the type-of-travel contexts could be selectable by a user, with the device receiving the user selection and changing the context viewed in the augmented reality environment as the viewer traveled, whether driving in a vehicle or walking on foot.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wan in view of Shroff, Davis and Meyer to incorporate the teachings of Hariton, so that the combined features would provide the user with an easier way of making type-of-travel context selections and allow the user to select their preferred type-of-travel context based on how the individual is traveling.
In addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose transmit, via the vehicle, context-sensitive content to the augmented reality device for display thereon, the context-sensitive content being associated with one or more real objects viewable within the field-of-view of the augmented reality device, and being generated based, at least in part, on: the one or more known areas of the grid, the selected content type, and the user-selected type-of-travel context, wherein at least a portion of the context-sensitive content is different for the first type-of-travel context associated with walking than for the second type-of-travel context associated with vehicle travel (Fig. 6 step 645 and paragraph 72 of Wan teach a display of context-sensitive content; Fig. 5 and paragraphs 15 and 48 of Shroff teach that a plurality of uniformly sized grids are provided to the vehicle based on the vehicle’s current location and that the distance component can determine the distance based at least in part on the region falling within an interior region of a map grid; paragraph 95 of Davis teaches that the display content can be selected from the group consisting of a graphical user interface, interactive media content, icons, images, video, text, interactive 3D objects, environments, buttons, and control affordances; and paragraph 101 of Meyer teaches that, because the way in which a device and an input member move in these usage contexts may differ, different input-location correction models may be associated with each usage context, and the particular input-location correction model that is selected may be based on the usage context of the device; for example, the device may detect a usage context corresponding to use of the device in a car and may therefore select an input-location correction model that is configured for road vehicle travel, or may detect a usage context corresponding to bicycle travel, walking, jogging, train travel, airplane travel, or the like).
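For illustration only, the claim 1 limitation mapped above can be pictured as a filter-and-annotate step keyed on the known grid areas, the selected content type, and the user-selected type-of-travel context. Everything in this sketch (the cell tuples, dictionary keys, and the two detail strings) is hypothetical and is not drawn from the cited references.

    def generate_context_sensitive_content(known_cells, content_type, travel_context,
                                           visible_objects):
        """Annotate real objects in the field-of-view; at least a portion of the
        content differs between the walking and vehicle-travel contexts."""
        known = set(known_cells)                 # e.g. {(1, 2), (1, 3), ...}
        content = []
        for obj in visible_objects:              # e.g. {"id": "cafe", "cell": (1, 2)}
            if obj["cell"] not in known:
                continue                         # outside the known grid areas
            detail = ("walking_directions" if travel_context == "walking"
                      else "driving_overlay")    # context-dependent portion
            content.append({"object": obj["id"],
                            "render_as": content_type,  # user-selected content type
                            "detail": detail})
        return content

    objs = [{"id": "cafe", "cell": (1, 2)}, {"id": "park", "cell": (9, 9)}]
    print(generate_context_sensitive_content({(1, 2)}, "icons", "walking", objs))
    # [{'object': 'cafe', 'render_as': 'icons', 'detail': 'walking_directions'}]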
Regarding claim 2, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1). In addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose the one or more processors communicatively coupled to the at least one memory device being additionally configured to:
access a cloud-based data storage device to determine the context-sensitive content based, at least in part, on the current preference of the individual, wherein the individual is currently in the vehicle or satisfies a predefined proximity condition relative to the vehicle (Fig. 5 and paragraph 64 of Wan teach that the AR expression generator 230 may also be used to create AR image content from one or more users of AR devices 145, within or outside of the vehicle, via a user interface 244).
Regarding claim 3, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1). In addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose wherein the second type-of-travel context indicates whether the individual has a vehicle operator role or a vehicle passenger role (paragraphs 2 and 68 of Wan teach that AR may assist non-driving users and that the projections rendered are based on a user’s preferences, permissions and whether they are a driver or passenger).
Regarding claim 4, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 3). In addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose wherein, in response to the type-of-travel context being the second type-of-travel context, the one or more processors communicatively coupled to the at least one memory device are additionally configured to:
transmit to the augmented reality device, via the vehicle, the context-sensitive content based, at least in part, on the one or more real objects being located along a driving path of travel of the vehicle (paragraph 12 of Wan teaches that road-side information will be displayed to a user’s field of vision when they are driving a vehicle).
Regarding claim 11, the method steps correspond to and are rejected the same as the controller of claim 1 (see claim 1 above).
Regarding claim 12, the method steps correspond to and are rejected the same as the controller of claim 2 (see claim 2 above).
Regarding claim 13, the method steps correspond to and are rejected the same as the controller of claim 3 (see claim 3 above).
Regarding claim 14, the method steps correspond to and are rejected the same as the controller of claim 4 (see claim 4 above).
Regarding claim 17, the claimed non-transitory computer-readable medium corresponds to and is rejected the same as the controller of claim 1 (additionally, Wan discloses in paragraph 36 a computer-readable memory storing program instructions).
Regarding claim 18, the claimed non-transitory computer-readable medium corresponds to and is rejected the same as the controller of claim 2 (see claim 2 above).
Regarding claim 19, the claimed non-transitory computer-readable medium corresponds to and is rejected the same as the controller of claim 3 (see claim 3 above).
Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Wan in view of Shroff, Davis, Meyer and Hariton as applied to claims 3 and 1 above, and further in view of Brown et al. (Pub. No.: US 2023/0215106 A1), hereinafter Brown.
Regarding claim 5, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 3 above). In addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose wherein, responsive to the type-of-travel context being the first type-of-travel context, the one or more processors communicatively coupled to the at least one memory device are additionally configured to:
transmit to the augmented reality device, via the vehicle, context-sensitive content based, at least in part, on the one or more real objects (Paragraphs 14 and 73 of Wan teach that a user may use the vehicle AR system or their own AR device, even after the occupants have exited the vehicle). However, Wan in view of Shroff, Davis, Meyer and Hariton fail to disclose the objects being located along a walking path of travel.
Brown discloses being located along a walking path of travel (Fig. 9 and paragraph 101 teach a display and device that a user can use while walking along a pathway).
When combining Brown’s teachings with Wan’s, it would have been obvious to a person of ordinary skill in the art to combine the similar teachings of continued use: Wan teaches continued use of the device after the occupants have exited the vehicle, and Brown teaches use while walking along a pathway. This would produce the predictable result of allowing a user continued access to the context-sensitive content via the vehicle without having to be in the vehicle, such as along a walking path of travel. In doing so, this would improve the user’s experience and help the user reach destinations, including locations a car cannot reach.
Regarding claim 9, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1). However, Wan in view of Shroff, Davis, Meyer and Hariton fail to disclose being additionally configured to determine whether the one or more real objects viewable within the field-of-view of the augmented reality device is located within a peripheral portion of the field-of-view of the augmented reality device, or is within a central portion of the field-of-view of the augmented reality device, wherein the central portion of the field-of-view is a portion within a first predefined angle relative to a central axis of the field-of-view, and wherein the peripheral portion is another portion that is outside the central portion of the field-of-view.
Brown discloses being additionally configured to determine whether the one or more real objects viewable within the field-of-view of the augmented reality device is located within a peripheral portion of the field-of-view of the augmented reality device, or is within a central portion of the field-of-view of the augmented reality device, wherein the central portion of the field-of-view is a portion within a first predefined angle relative to a central axis of the field-of-view, and wherein the peripheral portion is another portion that is outside the central portion of the field-of-view (Fig. 10 and paragraphs 28 and 120 teach that an AR wearable device can not only centralize and display telemetry data and navigation information in the user’s field of view, but can also detect physical obstacles or virtual objects to modulate performance, and that if something is not in the field of view of the device, a virtual reality effect will prompt the user that something is out-of-view).
When applying this known technique from Brown to the teachings of Wan in view of Shroff, Davis, Meyer and Hariton, it would have been obvious to a person of ordinary skill in the art to combine the teachings, since AR wearable devices already perform functions such as centralizing and displaying navigational information within a user’s field-of-view and detecting physical and virtual objects.
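For illustration only, the central-versus-peripheral determination recited in claim 9 reduces to comparing the angle between an object’s viewing direction and the central axis of the field-of-view against a predefined angle. The 15-degree half-angle and all names in the sketch below are hypothetical.

    import math

    def classify_fov_position(object_dir, central_axis, half_angle_deg=15.0):
        """'central' if the object lies within the predefined angle of the central
        axis of the field-of-view, otherwise 'peripheral' (cf. claim 9)."""
        dot = sum(a * b for a, b in zip(object_dir, central_axis))
        mags = (math.sqrt(sum(a * a for a in object_dir))
                * math.sqrt(sum(b * b for b in central_axis)))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mags))))
        return "central" if angle <= half_angle_deg else "peripheral"

    print(classify_fov_position((0.1, 0.0, 1.0), (0.0, 0.0, 1.0)))  # central (~5.7 deg)
    print(classify_fov_position((1.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # peripheral (45 deg)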
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wan in view of Shroff, Davis, Meyer and Hariton as applied to claims 1 and 11 above, and further in view of Svanegaard et al. (Pub. No.: US 2021/0000634 A1), hereinafter Svanegaard.
Regarding claim 6, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1). However, Wan in view of Shroff, Davis, Meyer and Hariton fail to disclose wherein the one or more processors communicatively coupled to the at least one memory device are configured to perform the estimate of the location of the augmented reality device by:
receiving a selection of at least one area of the one or more known areas of the grid from among a plurality of uniformly-sized areas within a predefined proximity to the estimated location of the vehicle.
Svanegaard discloses a selection of at least one area of the one or more known areas (paragraph 260 teaches that a display may display the accessory device’s current location and a map so that the user can select his/her current location).
When applying Svanegaard’s known technique of allowing a user to select their estimated location on a map to Wan in view of Shroff, Davis, Meyer and Hariton, it would have been apparent that, by a user selecting their location on a map comprising a plurality of uniformly sized grid areas within a predetermined proximity of the vehicle (Shroff, in Fig. 5 and paragraph 15, teaches that a plurality of uniformly sized grids are provided to the vehicle based on the vehicle’s current location), the controller would be receiving a selection of one of the grid areas, as selected by the user. This would allow the user to view nearby areas or regions in proximity to the vehicle’s location and would also inform them of approximately where they are in relation to that nearby area.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify Wan in view of Shroff, Davis, Meyer and Hariton and incorporate the teachings of Svanegaard to design a mapping system with a uniformly-sized, grid-like pattern that allows a user to select an area of the grid to show their current location.
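For illustration only, receiving a user’s selection of one known grid area from among the uniformly-sized areas near the vehicle (claim 6, per the Svanegaard combination) can be sketched as enumerating candidate areas and returning the one the user picks. The (row, col) tuples and the index-based selection below are hypothetical.

    def candidate_areas(vehicle_cell, radius=1):
        """Uniformly-sized grid areas within a predefined proximity of the vehicle."""
        r, c = vehicle_cell
        return [(r + dr, c + dc)
                for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)]

    def receive_area_selection(vehicle_cell, selected_index):
        """Return the known area the user selected, e.g. by tapping a map display."""
        return candidate_areas(vehicle_cell)[selected_index]

    print(receive_area_selection((1, 2), 4))  # (1, 2): the center of the 3x3 block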
Regarding claim 15, the method steps correspond to and are rejected the same as the augmented reality controller steps of claim 6 (see claim 6 above).
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wan in view of Shroff, Davis, Meyer, Hariton and Svanegaard as applied to claims 6 and 15 above, and further in view of Ren et al. (Pub. No.: US 2021/0116914 A1), hereinafter Ren.
Regarding claim 7, Wan in view of Shroff, Davis, Meyer, Hariton and Svanegaard disclose everything claimed as applied above (see claim 6). However, Wan in view of Shroff, Davis, Meyer, Hariton and Svanegaard fail to disclose wherein the plurality of uniformly-sized areas correspond to areas of approximately 3.0 m by approximately 3.0 m.
Ren discloses wherein the plurality of uniformly-sized areas correspond to areas of approximately 3.0 m by approximately 3.0 m (paragraph 122 teaches that, based on the initial position of a vehicle, each image tile may be of a certain size, such as a side length of 3 meters).
Shroff teaches in paragraph 15 that a plurality of map tiles can be loaded into memory for localizing a vehicle, and that these map tiles could represent a 25-meter by 25-meter region in a 3x3 block. If a person skilled in the art were to substitute the 3-meter by 3-meter grid tile size taught by Ren for the 25-meter by 25-meter grid tile size taught by Shroff, the results would have been predictable: the map grid tiles would simply be smaller or larger, and because the grid is scaled uniformly, the estimated location would still resolve to the same area of the map.
Therefore, it would have been obvious to a person skilled in the art to substitute areas of approximately 3.0 m by approximately 3.0 m for the plurality of uniformly-sized areas and achieve the predictable result of reducing the amount of memory allocated to storing the map data, while maintaining or improving the accuracy of localizing the vehicle in an environment.
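For illustration only, the predictable-result rationale can be checked with simple arithmetic: loading the same 3x3 block of tiles around the vehicle covers far less ground area, and hence less map data, with Ren’s 3-meter tiles than with Shroff’s 25-meter tiles. The sketch below assumes square tiles and omits byte figures, since per-tile data density is not stated in the record.

    def block_coverage_m2(tile_side_m, block_dim=3):
        """Ground area covered by a block_dim x block_dim block of square tiles."""
        return (tile_side_m * block_dim) ** 2

    print(block_coverage_m2(25.0))  # 5625.0 m^2 with Shroff's 25 m tiles
    print(block_coverage_m2(3.0))   #   81.0 m^2 with Ren's 3 m tiles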
Regarding claim 16, the method steps correspond to and are rejected the same as the augmented reality controller steps of claim 7 (see claim 7 above).
Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wan in view of Shroff, Davis, Meyer and Hariton as applied to claims 1 and 17 above, and further in view of Asghar et al. (U.S. Patent No. 11,562,550), hereinafter Asghar.
Regarding claim 8, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1). However, Wan in view of Shroff, Davis, Meyer and Hariton fail to disclose the one or more processors communicatively coupled to the at least one memory device being additionally configured to:
determine whether the one or more real objects viewable within the field-of-view of the augmented reality device satisfies a predefined proximity condition in which the one or more real objects are considered relatively nearby to the individual being co-located with the augmented reality device, or whether the one or more real objects satisfies a predefined remote condition in which the one or more real objects are considered relatively distant from the individual, wherein the context-sensitive content is generated based on whether the one or more real objects satisfies the predefined proximity condition or whether the one or more real objects satisfies the predefined remote condition.
Asghar discloses determine whether the one or more real objects viewable within the field-of-view of the augmented reality device satisfies a predefined proximity condition in which the one or more real objects are considered relatively nearby to an individual co-located with the augmented reality device (col 25 lines 7-15 teach about messages that will alert the driver about vehicle related events, such as nearby pedestrians/vehicles, within the threshold proximity of the vehicle), or whether the one or more real objects satisfies a predefined remote condition in which the one or more real objects are considered relatively distant from the individual, wherein the context-sensitive content is generated based on whether the one or more real objects satisfies the predefined proximity condition or whether the one or more real objects satisfies the predefined remote condition (col 25 lines 16-20 teach about messages alerting the driver about toll booths that are within certain distances, such as 1 mile away or other distances of any measured unit).
It would have been obvious to one of ordinary skill in the art to have modified Wan in view of Shroff, Davis, Meyer and Hariton to incorporate the teachings of Asghar to provide the augmented reality controller with the capability to alert the user to nearby or distant real-world objects, depending on their proximity to the vehicle. Doing so would provide the user with detailed messages conveying information about the distance of real-world objects relative to the vehicle, as Asghar teaches.
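For illustration only, the nearby/remote determination mapped from Asghar can be sketched as two distance thresholds. Both threshold values below are hypothetical (Asghar’s toll-booth example mentions distances on the order of one mile, roughly 1609 meters; the 50-meter nearby bound is assumed).

    def classify_distance(distance_m, nearby_m=50.0, remote_m=1609.0):
        """Classify a real object against predefined proximity and remote conditions."""
        if distance_m <= nearby_m:
            return "nearby"        # predefined proximity condition satisfied
        if distance_m >= remote_m:
            return "remote"        # predefined remote condition satisfied
        return "intermediate"      # neither condition applies

    print(classify_distance(20.0))    # nearby
    print(classify_distance(2500.0))  # remote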
Regarding claim 20, the claimed non-transitory computer-readable medium corresponds to and is analyzed the same as the controller of claim 8 (see claim 8 above).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wan in view of Shroff, Davis, Meyer and Hariton, and further in view of Rao et al. (Pub. No.: US 2018/0373350 A1), hereinafter Rao.
Regarding claim 10, Wan in view of Shroff, Davis, Meyer and Hariton disclose everything claimed as applied above (see claim 1), in addition, Wan in view of Shroff, Davis, Meyer and Hariton disclose one or more processors communicatively coupled to the at least one memory device being additionally configured to:
receive, from a user interface of the vehicle, a user selection to display the context-sensitive content utilizing alphanumeric characters, graphical icons, or a combination thereof (paragraph 66 of Wan discloses that user interface 244 is used to provide information regarding AR devices and user data, and paragraphs 44, 47 and 65 teach that occupant-specific information, including animation preferences, may include animated logos of POI objects). However, Wan in view of Shroff, Davis, Meyer and Hariton fail to disclose that the user interface is graphical.
Rao teaches receiving a selection from a graphical user interface of the vehicle (paragraph 19 teaches using the in-vehicle computing system to make adjustments based on user input received directly via touch screen or other external devices). When applying this known technique to Wan in view of Shroff, Davis, Meyer and Hariton, it would have been obvious to a person having ordinary skill in the art to provide the augmented reality controller with the capability to receive a selection from the graphical user interface of the vehicle, and this configuration would allow for the use of a single actuator in the system.
Therefore, it would have been obvious to one of ordinary skill in the art to have modified Wan in view of Shroff, Davis, Meyer and Hariton to incorporate the teachings of Rao and allow the augmented reality controller to receive a selection from a graphical user interface of the vehicle.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In response to applicant’s arguments regarding independent claim 1, which are directed to the newly amended limitations of “receive a user-selected type-of-travel context from among a plurality of type-of-travel contexts, the plurality of type-of-travel contexts comprising a first type-of-travel context associated with the individual walking and a second type-of-travel context associated with the individual engaging in vehicle travel”, the newly added limitations are now rejected over the previously applied prior art of Wan, Shroff, Davis and Meyer along with the newly applied prior art of Hariton (see claim 1 above). Therefore, the combination of Wan, Shroff, Davis and Meyer, along with the newly added Hariton reference, appears capable of performing the intended use set forth in claim 1.
Regarding the arguments to independent claims 11 and 17, since those claims are amended similarly to independent claim 1 and recite similar features, the method and non-transitory computer-readable medium of Wan in view of Shroff, Davis, Meyer and Hariton also appear capable of performing the intended use set forth in claims 11 and 17, respectively.
Lastly, dependent claims 2-10, 12-16 and 18-20 each depend directly or indirectly from the independent claims above and are rejected based on the analysis directed to those independent claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze, whose telephone number is (703) 756-5811. The examiner can normally be reached Monday-Friday, 9:00 am - 6:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.R./Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613