DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/20/2025 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 7, 15, 16, 20, 21 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (US 10,388,069) in view of Bradski et al. (US 2016/0026253).
Regarding claim 1, Gu teaches a method comprising: determining a location of a first augmented reality (AR) headset based at least on positional information and orientation information associated with the first AR headset (see col. 2, line 60 - col. 3, line 17, Gu discusses computer eye-worn glasses containing an AR camera; see col. 7, lines 33-51, Gu discusses determining an AR/VR camera's location and orientation);
receiving, at the first AR headset, a user input, the user input specifying a distance at which to place a digital object at a location within a field of view of the first AR headset (see col. 2, line 60 - col. 3, line 17, Gu discusses computer eye-worn glasses containing an AR camera; see col. 7, lines 33-51, Gu discusses a user specifying the distance at which a virtual object is inserted and rendered).
Bradski teaches mobile devices that are AR headsets (see figure 4A, para. 0233, 0909, Bradski discusses various mobile computer systems such as 3D AR head-mounted glasses);
determining geographical coordinates at which to place the digital object at the location within the field of view of the first AR headset based at least on the location of the first AR headset, the distance, and known geographical coordinates of one or more landmarks (see para. 0872-0873, Bradski discusses user's geospatial location (e.g., provided by GPS, attitude/position sensors, etc.) or mobile location relative to the buildings, may comprise data used by the computing network of the AR system to trigger the transmission of data used to display the virtual objects);
placing the digital object at the geographical coordinates within the field of view of the first AR headset (see para. 0850, Bradski discusses a virtual object placed at a fixed position or location within a physical environment viewed by the user’s device or the virtual object is placed at a position relative to the user’s device; see para. 0898-0899, Bradski discusses rendering objects based on the user’s view of the world); and
storing the geographical coordinates of the placed digital object for later use by a second AR headset in a second field of view associated with the second AR headset or transmitting the geographical coordinates of the placed digital object to the second AR headset (see figure 55, figure 57A, figure 57B, figure 143, para. 0556, 0862, 0871, Bradski discusses a cloud server that stores and updates data. The data may be transmitted between multiple AR devices at different geographical locations. The second AR user device renders and interacts with the virtual object).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 1. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu, while the teaching of Bradski continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative landmark location. The Gu and Bradski systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 2, Bradski teaches wherein the positional information comprises global positioning system (GPS) satellite signal data, cellular tower signal data, wireless internet signal data, network environment data, or combinations thereof (see para. 1214, Bradski discusses that the user's location may be determined through any of the localization techniques, e.g., GPS, Bluetooth, topological map, or map points related to the user's AR system).
The same motivation of claim 1 is applied to claim 2. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 2. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 3, Gu teaches wherein the orientation information comprises North East South West (NESW) data, gyroscope data, accelerometer data, magnetometer data, or combinations thereof (see col. 4 lines 53-62, Gu discusses gyroscope and/or an accelerometer to provide position and orientation data).
The same motivation of claim 1 is applied to claim 3. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 3. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 4, Gu teaches wherein receiving the user input comprises receiving a touch or voice input at a user interface on the first AR headset (see claim 12, col. 1 lines 59-67, Gu discusses receiving, via a user interface, an input as to a location to insert a light field object in images captured by a mobile camera).
The same motivation of claim 1 is applied to claim 4. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 4. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 6, Bradski teaches further comprising: retrieving the geographical coordinates of the digital object placed in the field of view of the first AR headset; and placing the digital object at the geographical coordinates within the second field of view of the second AR headset (see para. 1502, 1515, Bradski discusses first AR user sharing to other AR users data related to a geographical location of a placed object).
The same motivation of claim 1 is applied to claim 6. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 6. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 7, Bradski teaches wherein one of the first AR headset or the second AR headset is positioned within a room or set, wherein the other of the first AR headset or the second AR headset is positioned outside the room or set, and wherein the field of view of the first AR headset and the second field of view of the second AR headset includes the room or set (see para. 1416, Bradski discusses virtual decors for the physical room or physical space; see para. 1502, 1515, Bradski discusses first AR user sharing to other AR users data related to a geographical location of a placed object).
The same motivation of claim 1 is applied to claim 7. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 7. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Claim 15 is rejected as applied to claim 1 as pertaining to a corresponding system.
Claim 16 is rejected as applied to claim 1 as pertaining to a corresponding system.
Claim 20 is rejected as applied to claim 2 as pertaining to a corresponding system.
Regarding claim 21, Bradski teaches further comprising accessing map information based on the location of the first AR headset, wherein the map information comprises the known geographical coordinates of one or more landmarks (see para. 0169, 0812, Bradski discusses a map that comprises comprehensive information about the physical objects of the real world in real-time).
The same motivation of claim 1 is applied to claim 21. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 21. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Claims 5, 8-14, 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (US 10,388,069) in view of Bradski et al. (US 2016/0026253), and further in view of Maciocci et al. (US 2012/0249741).
Regarding claim 5, Gu and Bradski do not expressly disclose wherein determining the geographical coordinates at which to place the digital object is further based at least on latitude information associated with the first AR headset, longitude information associated with the first AR headset, altitude information associated with the first AR headset, or combinations thereof.
However, Maciocci teaches wherein determining the geographical coordinates at which to place the digital object is further based at least on latitude information associated with the first AR headset, longitude information associated with the first AR headset, altitude information associated with the first AR headset, or combinations thereof (see figure 16, figure 17, para. 0007, Maciocci discusses head mounted AR device; see para. 0230, Maciocci discusses location data may be coordinates in longitude, latitude and elevation; see para. 0111, Maciocci discusses the head mounted device obtains spatial data (i.e., distances to objects in the images) using a distance sensor which measures distances from the device to objects and surfaces in the image).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 5. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu and Bradski in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu and Bradski, while the teaching of Maciocci continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative landmark location. The Gu, Bradski, and Maciocci systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 8, Gu and Bradski do not expressly disclose further comprising scaling the digital object placed within the second field of view of the second AR headset based at least on a comparison of positional information and orientation information between the first AR headset and the second AR headset.
However, Maciocci teaches further comprising scaling the digital object placed within the second field of view of the second AR headset based at least on a comparison of positional information and orientation information between the first AR headset and the second AR headset (see figure 3, para. 0095, Maciocci discusses second head mounted device may receive data regarding the virtual object to be rendered, such as its content and data regarding its general shape and orientation. The second head mounted device 10b may use the anchor surface selected by the first user (or another anchor surface selected by the second user) to determine a location, orientation and perspective for displaying the virtual object; see figure 19, para. 0035, 0129, Maciocci discusses head mounted device to generate a scaled three-dimensional model, the model data with location, perspective, and orientation data, and upload the model data to share the data with other devices; see figure 2, para. 0088, Maciocci discusses data transmitted from the first head mounted device to the second head mounted device may include the shape or object data. This data may enable the second head mounted device processor to render a displayed image of the virtual object corresponding to the second user's viewing perspective).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 8. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu and Bradski in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu and Bradski, while the teaching of Maciocci continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative landmark location. The Gu, Bradski, and Maciocci systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 9, Gu teaches one or more non-transitory computer storage media storing computer-usable instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: a distance manually specified by a user input at the first AR headset at which to place the digital object at a location within the first field of view (see col. 7, lines 33-51, Gu discusses a user specifying the distance at which a virtual object is inserted and rendered).
Bradski teaches accessing, by a second augmented reality (AR) headset, geographical coordinates of a digital object placed by a first AR headset in a first field of view, wherein the geographical coordinates were determined, by the first AR headset at a location, based at least on the location of the first AR headset, and known geographical coordinates of one or more landmarks (see para. 0872-0873, Bradski discusses user's geospatial location (e.g., provided by GPS, attitude/position sensors, etc.) or mobile location relative to the buildings, may comprise data used by the computing network of the AR system to trigger the transmission of data used to display the virtual objects);
placing the digital object at the geographical coordinates within a second field of view of the second AR headset, wherein the location of the first AR headset is different from a second location of the second AR headset (see figure 55, figure 57A, figure 57B, figure 143, para. 0556, 0862, 0871, Bradski discusses a cloud server that stores and updates data. The data may be transmitted between multiple AR devices at different geographical locations. The second AR user device renders and interacts with the virtual object).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu with Bradski to arrive at the invention of claim 9. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu, while the teaching of Bradski continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative landmark location. The Gu and Bradski systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Maciocci teaches scaling the digital object placed within the second field of view of the second AR headset based at least on positional information and orientation information associated with the second AR headset (see figure 16, figure 17, para. 0007, Maciocci discusses head mounted device; see para. 0107, Maciocci discusses second user may change the orientation, size, and shape of the virtual object; see figure 19, para. 0035, 0129, Maciocci discusses head mounted device to generate a scaled three-dimensional model, the model data with location, perspective, and orientation data, and upload the model data to share the data with other devices).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 9. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu and Bradski in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu and Bradski, while the teaching of Maciocci continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative location. The Gu, Bradski, and Maciocci systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 10, Maciocci teaches wherein the geographical coordinates of the digital object are determined based at least on positional information and orientation information associated with the first AR headset (see figure 18, para. 0129, Maciocci discusses distance, location, perspective, and orientation data).
The same motivation of claim 9 is applied to claim 10. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 10. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 11, Gu teaches wherein the geographical coordinates of the digital object are determined based at least on touch or voice input provided on the first AR headset (see col. 7, lines 33-51, Gu discusses a user specifying the distance at which a virtual object is inserted and rendered).
The same motivation of claim 9 is applied to claim 11. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 11. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Claim 12 is rejected as applied to claim 2 as pertaining to a corresponding one or more computer storage media.
Claim 13 is rejected as applied to claim 3 as pertaining to a corresponding one or more computer storage media.
Regarding claim 14, Maciocci teaches the operations further comprising: accessing second geographical coordinates of a second digital object placed by the first AR headset in the first field of view; placing the second digital object at the second geographical coordinates within the second field of view of the second AR headset; and constructing a virtual set within the second field of view, the virtual set comprising the digital object, the second digital object, or a combination thereof, wherein one of the first AR headset or the second AR headset is positioned within the virtual set, and wherein the other of the first AR headset or the second AR headset is positioned outside the virtual set (see figure 19, para. 0035, Maciocci discusses head mounted device to generate a three-dimensional model of an area, tag the data with location data, and upload the model to share the data with other devices; see figure 2, para. 0088, Maciocci discusses data transmitted from the first head mounted device to the second head mounted device may include the shape or object data. This data may enable the second head mounted device processor to render a displayed image of the virtual object corresponding to the second user's viewing perspective; see para. 0343-0344, Maciocci discusses determining a location of the second body mounted sensor device and transmitting the geographical identification metadata and three-dimensional map to the second body mounted sensor device).
The same motivation of claim 9 is applied to claim 14. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 14. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 17, Gu and Bradski do not expressly disclose further comprising the second AR headset, the second AR headset configured to: retrieve the geographical coordinates of the digital object placed in the field of view of the first AR headset; and place the digital object at the geographical coordinates within a second field of view of the second AR headset.
Maciocci teaches further comprising the second AR headset, the second AR headset configured to: retrieve the geographical coordinates of the digital object placed in the field of view of the first AR headset (see para. 0067, 0087-0088, 0161, 0225, Maciocci discusses obtaining virtual object data comprising geometrical model, geographic coordinate information, distance, and orientation); and
place the digital object at the geographical coordinates within a second field of view of the second AR headset (see para. 0087-0088, Maciocci discusses the first and second devices viewing the rendered virtual object in the field of view of the two devices).
Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 17. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
The determination of obviousness is predicated upon the following: One skilled in the art would have been motivated to modify Gu and Bradski in this manner in order to improve virtual object rendering by transmitting the virtual object's geographical location to each augmented reality (AR) device to properly display the virtual object from the perspective view of each AR device. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Gu and Bradski, while the teaching of Maciocci continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of calculating the geographical position of a rendered virtual object at a first AR device and transmitting the virtual object data to other AR devices to allow other user AR devices to view the rendered virtual object at a proper relative location. The Gu, Bradski, and Maciocci systems perform augmented reality object generation; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
Regarding claim 18, Bradski teaches wherein one of the first AR headset or the second AR headset is positioned within boundaries of a real world environment, wherein the other of the first AR headset or the second AR headset is positioned outside the boundaries of the real world environment, and wherein the digital object is placed within the boundaries of the real-world environment (see para. 1416, Bradski discusses virtual decors for the physical room or physical space; see para. 1502, 1515, Bradski discusses first AR user sharing to other AR users data related to a geographical location of a placed object).
The same motivation of claim 17 is applied to claim 18. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 18. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Regarding claim 19, Maciocci teaches wherein the second AR headset is further configured to scale the digital object placed within the second field of view based at least on second positional information and second orientation information associated with the second AR headset (see para. 0107, Maciocci discusses second user may change the orientation, size, and shape of the virtual object; see para. 0152, Maciocci discusses the first head mounted device may transmit data regarding the virtual object to the second head mounted device in an orientation based on the second user's orientation).
The same motivation of claim 17 is applied to claim 19. Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Gu and Bradski with Maciocci to arrive at the invention of claim 19. The result would have been expected, routine, and predictable in order to perform virtual object rendering across multiple devices.
Conclusion
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNY A CESE whose telephone number is (571) 270-1896. The examiner can normally be reached on Monday – Friday, 9am – 4pm.
If attempts to reach the primary examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Kenny A Cese/
Primary Examiner, Art Unit 2663