DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments regarding the drawing objections and 112(a) rejection filed 30 December 2025 have been fully considered but they are not persuasive.
Applicant argues that the specification amendments filed 30 December 2025 and existing drawings are now sufficient to convey the structure and methodology of the claimed invention to a person of ordinary skill in the art such that 112(a) is satisfied and amended drawings are not needed.
In response, the specification amendments upon which Applicant relies are new matter and must be cancelled in the reply to this Office Action.
Applicant’s arguments with respect to prior art rejections have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In particular, Applicant argues that Zeng does not disclose any concept where both the HMD and the tracker track the same object independently to obtain respective tracking results, determine a converting relationship between coordinate systems based on those results, or transform the tracker’s sensor data based on such a converting relationship. A similar argument is made as to Frantz. The newly applied Liao reference discloses precisely these functions as mapped below in the prior art rejections.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the following elements must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
The drawings are objected to under 37 CFR 1.83(a) because they fail to show a complete system or the methodologies used to determine the first pose, first reference pose, and second reference pose as described in the specification and recited in the claims. See the 112(a) rejection below, which explains various aspects of insufficient disclosure, a deficiency also reflected in the dearth of drawings filed with the instant specification. Moreover, the drawings also lack illustrations showing algorithms for supporting the external tracking device 41 and its purported ability to determine the pose of the target object or the reference object (the first pose and first reference pose, as claimed, are received from the external tracking device, but no drawing illustrates the determination of these poses, which must be accomplished before the host can receive such information).
Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). For example, the various embodiments refer to a variety of tracking techniques, e.g., outside-in and/or inside-out using beacons and markers, but the spatial relationship of these elements for each of the various embodiments is unclear and clearly wanting of illustration for a proper understanding of how these various different tracking technologies work in relation to the tracked object, reference object, and target object.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The amendment filed 30 December 2025 is objected to under 35 U.S.C. 132(a) because it introduces new matter into the disclosure. 35 U.S.C. 132(a) states that no amendment shall introduce new matter into the disclosure of the invention. The added material which is not supported by the original disclosure is as follows:
The amendments to [0003] add new matter regarding the external tracking device including OptiTrack systems and the HTC Vive Tracker, which were not described in the original application. See also the related 112(a) rejection regarding the lack of disclosure upon filing for this external tracking device.
The amendments to [0028] add new matter regarding the pose detection algorithm of the external tracking device, including 2D or 3D pose estimation techniques based on skeletal detection, OpenPose, and MediaPipe.
The amendments to [0031] add new matter regarding inside-out and outside-in tracking using visual sensors mounted on a headset; no such visual sensor location was specified in the original specification. Indeed, nowhere in the specification is any detail provided as to any hardware that may be used to construct this “external tracking device 41”, and the external tracking device 41 is not illustrated as being mounted on a headset but instead on a user. See Fig. 2 copied below in the 112(a) rejection.
The amendments to [0032] locate cameras on the host 200, but the original specification does not specify this location, which is also contrary to Fig. 4.
The amendments to [0040] now specify control of optical and infrared sensors, but such control functionality and the use of infrared sensors were not mentioned in the original specification.
The amendments to [0040] also add new matter regarding beacons and lighthouse stations as including commercial VR platforms (e.g., HTC Vive).
Applicant is required to cancel the new matter in the reply to this Office Action.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
In general, the specification focuses almost exclusively upon the “host” and fails to provide an adequate written description of the other devices and methods necessary for a complete system that performs the claimed pose calibrating method, apparatus and computer-readable medium. Indeed, the only apparatus or hardware disclosed is the host 200 shown in the high-level sketch of Fig. 4 (copied below) which may be embodied as an HMD (head mounted display) 200 as shown in Fig. 3 (also copied below) and the “external tracking device 41”.
[Figs. 4 and 3 of the instant application, reproduced in greyscale]
As to the external tracking device 41, the instant specification merely mentions some unclear and unsupported functionality while failing to offer any structural detail for this element, including any location other than that illustrated in Fig. 4. Indeed, [0024] of the published application discloses the device 41 in terms of being “used to capture images of the at least one target object and accordingly determine, for example, the first pose of each of the target object [sic]”. Nowhere in the specification is any detail provided as to any hardware that may be used to construct this “external tracking device 41”.
There is also a distinct lack of algorithms to support the external tracking device 41 and its purported ability to determine the pose of the target object or the reference object (the first pose and first reference pose are received from the external tracking device as claimed, but the specification fails to adequately support how the device 41 determines these poses).
It is recognized that [0029] indicates that device 41 “may use some existing pose detection algorithms to determine/track the first pose of each joint of the user 499”, but no such algorithm is sufficiently identified, let alone disclosed in any detail whatsoever, except in Applicant’s belated attempt to do so in the form of new matter as discussed above.
The specification also mentions, in [0031], that “a tracker (or multiple trackers) can be disposed on the user 499, and the external tracking device 41 can be also used to determine/track the pose of the tracker via, for example, inside-out tracking mechanism and/or outside-in tracking mechanism”. No details are provided for these “tracking mechanisms”.
Moreover, even if such “inside-out” or “outside-in” tracking is conventional, the various options for its use are simply listed as including a single tracker, plural trackers, a wearable device, or a tracker “disposed at a place that be observed by both of the host 200 and the external tracking device 41” [sic], but none of these various options is described in sufficient detail. For each type of tracking, what specific apparatus elements are used, how are they physically and logically arranged relative to the host, and what particular methodology determines the various poses such that the determined poses may be received by the host? The drawings also fail to illustrate any such devices, and the lack of an illustrated spatial relationship thus further confuses the issues.
In addition, the claimed “tracking, by the host, a second reference pose of the reference object” is not described in sufficient detail. This step is illustrated in Fig. 3, step S330. In [0039] it is simply stated, without providing any detail, that “the processor 204 may directly determine the pose of the HMD (which can be regarded as a host pose of the host 200) as the second reference pose of the reference object”. It is unclear how this pose may be “directly determined”. In addition, the specification as filed does not support a cogent system that receives a first reference head pose tracked by an external tracking device while tracking a second reference pose of the HMD using SLAM; nor is the source of data for tracking the second reference pose of the HMD clearly disclosed in the original specification.
This same paragraph [0039] also mentions that “processor 204 may track the pose of the HMD via performing, for example, Simultaneous Localization and Mapping (SLAM)”. Although SLAM is conventional in and of itself, SLAM more precisely covers a variety of techniques including monocular SLAM, stereo SLAM, and RGB-D SLAM, each having different input data. Note that the instant specification merely discloses, in [0039], that the processor 204 “directly determines the pose of the HMD using SLAM”. No data source whatsoever is specified for the SLAM algorithm in the instant specification; as such, it is wholly unclear and undisclosed how the processor 204 directly determines the pose of the HMD via an unspecified variant of SLAM using a wholly unspecified and undisclosed data source.
Moreover, even if the device 41 is inferred as including a camera and even if the image from this inferred camera is somehow used by the processor 204 such an inference would still not provide an adequate disclosure because each of the SLAM variants requires multiple images from different perspectives. In other words, monocular SLAM compares features between consecutive image frames taken from different perspectives and stereo SLAM employs a pair of images from two cameras. No such details are offered by the instant specification further deepening the mystery as to how the second reference pose is tracked by the host.
Par. [0040] offers another inadequate disclosure lacking crucial details by stating that “In another embodiment, the processor 204 may obtain the pose of the HMD by using the outside-in tracking mechanism. That is, the processor 204 may receive the beacons from the lighthouse base stations nearby and accordingly determine the pose of the HMD.” What are these “beacons” and “lighthouses”? Where are they shown in the drawings? No other hardware element, besides the host processor, is disclosed as implementing this method, but such lone operation by a processor has not been adequately disclosed in the specification as filed.
In particular, what element of the processor 204 is capable of receiving light beacons? Microprocessors are not conventionally capable of receiving a light signal from a beacon. Still further, what are the algorithmic details of outside-in tracking? None have been provided in the original specification, leaving those of ordinary skill in the art to guess as to how this is done. Similar questions are raised for the “inside-out” tracking mechanism, for the purported combination of both outside-in and inside-out tracking, and for the application of such tracking to a handheld controller in [0041].
Still further, the disclosure fails to present a single cogent system; instead, a laundry list of poorly disclosed techniques is offered in the form of various different embodiments. In other words, it is unclear how any of the various methods of determining pose work in a full system to ultimately determine the converting relationship that is used to convert the first pose of the target object into a corresponding second pose.
In sum, the specification does not evidence possession of the claimed invention and one of ordinary skill in the art would not be apprised as to how to make and use the claimed invention.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liao (US 2022/0066547 A1) in view of Zeng {Zeng Q, Zheng G, Liu Q (8 March 2022) PE-DLS: a novel method for performing real-time full-body motion reconstruction in VR based on Vive trackers. Virtual Real. https://doi.org/10.1007/s10055-022-00635-5} and/or Frantz (US 2024/0062387 A1).
Claim 1
In regards to claim 1, Liao discloses a pose calibrating method {see title, abstract and cites below}, comprising:
receiving, by a host, a first pose of each of at least one target object from an external tracking device using a first coordinate system, wherein the at least one target object comprises at least one first joint on a user body tracked by the external tracking device, and the first pose of each first joint is a user joint pose in the first coordinate system, wherein the host is a head-mounted display (HMD), the external tracking device is an external camera tracking device
{Fig. 2 illustrates host HMD 120. For the external tracking device, see Fig. 4 below, noting that [0037] specifically discloses locating the sensor (camera) on the tracker 140 itself, which is conclusively an external location for the external tracking device 140. For tracking, see outside-in tracking function TR2 tracking pose data of the HMD 120, and thus the user’s head located within the HMD 120, as per [0033]-[0037]};
receiving, by the host, a first reference pose of a reference object tracked by the external tracking device from the external tracking device, wherein the reference object is a head of the user body, and the first reference pose is a pose of the head;
{external tracking device 140 tracks a reference object (head of the user’s body) to provide a first reference pose of the head. For tracking, see outside-in tracking function TR2 tracking pose data of the HMD 120, and thus the user’s head located within the HMD 120, as per [0033]-[0037]}
[Fig. 4 of Liao, reproduced in greyscale]
tracking, by the host, a second reference pose of the reference object by using Simultaneous Localization and Mapping (SLAM), wherein the host uses a second coordinate system, wherein the second reference pose is a pose of the HMD
{see Fig. 1, [0031]-[0034] including SLAM for tracking the head-mounted device 120 using inside-out tracking, wherein the second reference pose is a second pose of the HMD reference object/user’s head in a second coordinate system (of the camera and/or HMD 120 to which the camera is mounted)};
determining, by the host, a converting relationship between the first reference pose and the second reference pose, wherein the converting relationship characterizes a relative position between the first coordinate system used by the external tracking device and the second coordinate system used by the host;
{see Figs. 2-3, [0042]-[0055] including transformation relationship TRAN};
[Figs. 2-3 of Liao, reproduced in greyscale]
and
converting, by the host, the first pose of each of the at least one target object into a corresponding second pose of each of the at least one target object based on the converting relationship and accordingly providing a visual content of a reality service, wherein the reality service is at least one of a virtual reality (VR) service, an augmented reality (AR) service, a mixed reality (MR) service, or an extended reality (XR) service, wherein the second pose of each target object is the corresponding user joint pose in the second coordinate system,
{Figs. 2, 7, 8 including pose transformer 164; Fig. 3, transforming first pose data in the inside-out coordinate system into third pose data in the outside-in coordinate system; and S250, applying the transformed data to determine a device pose. As to providing a visual content of a reality service (VR, AR, etc.), see [0002]-[0004], [0024]-[0026], [0062]-[0064]},
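For orientation only, the claimed flow mapped above reduces, in a translation-only simplification, to deriving an offset from two observations of one reference object and applying it to each target pose. The sketch below is purely illustrative; the function names and coordinates are hypothetical and drawn from no cited reference, and a full implementation would also handle rotation (e.g., via quaternions or rotation matrices):

```python
# Hypothetical, translation-only sketch of the claimed pose-calibrating flow:
# the host receives poses in the tracker's coordinate system (first system),
# tracks the same reference object in its own coordinate system (second
# system), derives the converting relationship, then converts target poses.

def converting_relationship(first_ref, second_ref):
    """Offset between coordinate systems, from two poses of one reference object."""
    return tuple(a - b for a, b in zip(first_ref, second_ref))

def convert(first_pose, offset):
    """Map a pose from the first (tracker) system into the second (host) system."""
    return tuple(p - d for p, d in zip(first_pose, offset))

# Reference object (e.g. the user's head / HMD) observed by both devices:
first_ref = (2.0, 1.5, 0.0)   # tracked by the external device (first system)
second_ref = (0.0, 1.5, 0.0)  # tracked by the host, e.g. via SLAM (second system)

offset = converting_relationship(first_ref, second_ref)  # (2.0, 0.0, 0.0)

# A target joint reported by the external tracker, re-expressed for the host:
joint_host = convert((2.5, 1.0, 0.3), offset)            # (0.5, 1.0, 0.3)
```

The point of the sketch is only that one shared reference object suffices to relate the two coordinate systems; nothing in it is attributed to Liao, Zeng, or Frantz.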
Zeng is a highly analogous reference teaching many of the base elements of the invention including a pose calibrating method {see title, abstract and cites below relating to full-body motion capture (MoCap) involving pose tracking and reconstruction of the human body as an avatar (visual content) within a virtual reality environment/service}, comprising:
receiving, by a host, a first pose of each of at least one target object from a tracking device {See Section 2.1 MoCap systems, 3.1 Data acquisition including HTC Vive trackers};
receiving, by the host, a first reference pose of a reference object from the external tracking device {See Section 2.1 MoCap systems, 3.1 Data acquisition including HTC Vive trackers, 3.2 Calibration including user initial position (first reference T-pose) for avatar calibration};
tracking, by the host, a second reference pose of the reference object {See Section 2.1 MoCap systems, Fig. 1, 3.1 Data acquisition including HTC Vive trackers, 3.2 Calibration including tracked position of the user via the Vive trackers for updating pose of the corresponding avatar};
determining, by the host, a converting relationship between the first reference pose and the second reference pose {Fig. 1, Section 3.2 including equations (1) and (2) that determines a calibrated converting relationship between the tracked user and the corresponding avatar including position and translation vectors (quaternions)}; and
converting, by the host, the first pose of each of the at least one target object into a corresponding second pose based on the converting relationship and accordingly providing a visual content of a reality service {Fig. 1, Section 3.2 including equations (1) and (2) and section 3.3 Full-body motion reconstruction}.
Zeng also teaches wherein providing the visual content of the reality service comprises:
determining at least one second joint on an avatar based on the corresponding second pose of each of the target object, wherein the at least one second joint one-to-one corresponds to the at least one target object, and the avatar corresponds to the user body
{the BRI of “joint” includes an entire body part such as a head as per [0035] of the instant specification. Zeng tracks and determines 19 joints as illustrated in Fig. 3 and Data Acquisition section 3.1. Further, in regards to the avatar, see section 3.3 Full-body motion reconstruction, which determines avatar joints based on the tracked/determined joints of the human such that the avatar corresponds to the target body (the human body being tracked)}; and
showing the avatar in the visual content of the reality service, wherein each of the at least one second joint on the avatar is shown with the corresponding second pose of each of the target object {see Fig. 1 including full-body motion avatar output of the system in which the avatar mirrors the full body motion of the tracked human body using a mapping process including mapping the avatar based on Liao’s pose transformations}.
Frantz is also highly relevant and analogous. See the first office action mapping of Frantz which is hereby incorporated by reference. Frantz also teaches
determining at least one second joint on an avatar based on the corresponding second pose of each of the target object, wherein the at least one second joint one-to-one corresponds to the at least one target object, and the avatar corresponds to the user body
{the BRI of “joint” includes an entire body part such as a head as per [0035] of the instant specification. IR object tracking for second and subsequent reference poses further disclosed in Fig. 5 [0035], [0048]-[0066]}; and
showing the avatar in the visual content of the reality service, wherein each of the at least one second joint on the avatar is shown with the corresponding second pose of each of the target object {see Fig. 5 including S31 updating AR pose and displaying/showing the object/AR visualization (avatar) in world coordinate system based on each detected pose of the object as per [0069]-[0071]}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao which already discloses the external tracking device, HMD host, pose tracking using SLAM, determining coordinate transforms and converting, by the host, the first pose of each of the at least one target object into a corresponding second pose of each of the at least one target object based on the converting relationship and accordingly providing a visual content of a reality service, such that providing the visual content includes determining at least one second joint on an avatar based on the corresponding second pose of each of the target object, wherein the at least one second joint one-to-one corresponds to the at least one target object, and the avatar corresponds to the user body and showing the avatar in the visual content of the reality service, wherein each of the at least one second joint on the avatar is shown with the corresponding second pose of each of the target object as taught by Zeng and/or Frantz because avatars helpfully represent the users by enhancing presence in virtual environments, because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 11 and 20
The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 11 and computer readable medium claim 20 while noting that the rejection above cites to both device and method disclosures. For the processor and computer readable storage medium storing program limitations of claims 11 and 20, see Liao [0028]. See also Zeng’s Experiments section, which describes a computer for running tests according to the disclosure, including a processor (CPU), memory, and software programs.
To the extent Liao’s processor is not viewed as including a computer readable storage medium, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao’s processor implementation to be implemented on a computer readable medium as software as taught by Zeng, because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 3-6 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Zeng and/or Frantz as applied to claim 1 above, and further in view of Duijnhouwer (US 2025/0240400 A1).
Claim 3
In regards to claim 3, Liao discloses wherein the step of determining, by the host, the converting relationship between the first reference pose and the second reference pose comprises:
characterising the first reference pose and the second reference pose {Liao characterizes the poses in respective coordinate systems, e.g., the pose of the object in the world coordinate system, as per [0030]-[0033], [0040]-[0048], [0067]-[0070]}
Liao and/or Frantz discloses that determining, by the host, the converting relationship between the first reference pose and the second reference pose comprises characterising the first and second reference poses in respective coordinate systems, but does not describe characterizing the poses as vectors, determining a vector difference between the first vector and the second vector, or determining the vector difference as the converting relationship.
Duijnhouwer is analogous art from the same field of pose determination and augmented reality head-mounted displays with generated avatars. See abstract, Figs. 2, 3, 22, 23 and cites below.
Duijnhouwer also teaches characterising the first and second reference poses in respective coordinate systems as vectors, determining a vector difference between the first vector and the second vector, and determining the vector difference as the converting relationship
{Fig. 2, user orientation module 429, which determines head pose, and object detection, which determines distance, orientation and/or angular position of the user 250 with respect to the world environment and objects within that environment, [0117]-[0121]. Note that coordinate system transforms (e.g., between the world coordinate frame and the AR coordinate frame) involve conventional vector differences, also referred to as translation vectors when the converting relationship involves a coordinate frame translation, as taught by Duijnhouwer in [0204]-[0211] and reinforced by Zeng (see section 3.2 Calibration, including equations (1) and (2), which determine the claimed vector difference)}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao and/or Frantz which already discloses determining, by the host, the converting relationship between the first reference pose and the second reference pose comprises: characterising the first and second reference pose in respective coordinate systems such that this process characterizes the poses as vectors and determines a vector difference between the first vector and the second vector; and determining the vector difference as the converting relationship as taught by Duijnhouwer and reinforced by Zeng because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 4
In regards to claim 4, Liao is not relied upon to disclose, but Duijnhouwer reinforced by Zeng teaches, wherein the step of determining the vector difference between the first vector and the second vector comprises:
determining the vector difference via subtracting the second vector from the first vector {see Duijnhouwer [0204]-[0211] and Zeng section 3.2 Calibration, including equations (1) and (2), which determine the claimed vector difference}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao and/or Frantz which already discloses determining, by the host, the converting relationship between the first reference pose and the second reference pose comprises: characterising the first and second reference pose in respective coordinate systems such that this process characterizes the poses as vectors and determines a vector difference between the first vector and the second vector; determining the vector difference as the converting relationship, and determining the vector difference via subtracting the second vector from the first vector as taught by Duijnhouwer reinforced by Zeng because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 5
In regards to claim 5, Liao is not relied upon to disclose, but Duijnhouwer and Zeng teach, wherein the step of converting the first pose of each of the at least one target object into the corresponding second pose based on the converting relationship comprises:
determining the corresponding second pose via subtracting the vector difference from the first pose of each of the at least one target object.
{see Duijnhouwer [0204]-[0211] and Zeng section 3.2 Calibration, including equations (1) and (2), which determine the claimed vector difference}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao and/or Frantz which already discloses determining, by the host, the converting relationship between the first reference pose and the second reference pose comprises: characterising the first and second reference pose in respective coordinate systems such that this process characterizes the poses as vectors and determines a vector difference between the first vector and the second vector; determining the vector difference as the converting relationship, and determining the corresponding second pose via subtracting the vector difference from the first pose of each of the at least one target object as taught by Duijnhouwer reinforced by Zeng because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
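The arithmetic recited in claims 3-5 amounts to two elementwise subtractions. As a purely illustrative self-consistency check (the coordinates below are hypothetical and drawn from no cited reference): the difference of the two reference vectors, when subtracted from the first reference pose itself, must recover the second reference pose.

```python
# Claims 3-5, translation-only arithmetic: the converting relationship is the
# vector difference (claim 4: second vector subtracted from the first), and
# conversion subtracts that difference from a first pose (claim 5).
first_vec = [1.25, 0.75, 2.0]    # first reference pose (tracker's system)
second_vec = [0.25, 0.75, -1.0]  # second reference pose (host's system)

# Claim 4: vector difference via subtracting the second vector from the first.
diff = [a - b for a, b in zip(first_vec, second_vec)]

# Claim 5 applied to the reference pose itself must recover the second
# reference pose, a quick self-consistency check of the scheme.
recovered = [p - d for p, d in zip(first_vec, diff)]
assert recovered == second_vec
```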
Claim 6
In regards to claim 6, Liao discloses wherein the first coordinate system is different from the second coordinate system {see previous cites above wherein the inside-out and outside-in coordinate systems differ}.
Claims 13-16
The rejection of method claims 3-6 above applies mutatis mutandis to the corresponding limitations of apparatus claims 13-16 while noting that the rejection above cites to both device and method disclosures. For the processor and computer readable storage medium storing program limitations, see Liao [0028]. See also Zeng’s Experiments section, which describes a computer for running tests according to the disclosure, including a processor (CPU), memory, and software programs.
To the extent Liao’s processor is not viewed as including a computer readable storage medium, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to have modified Liao’s processor implementation to be implemented on a computer readable medium as software as taught by Zeng, because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hamada (US 2025/0371827 A1) discloses generating avatars based on inside-out and outside-in tracking, SLAM, and coordinate transforms. See [0167]-[0168].
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata whose telephone number is (571)272-0113. The examiner can normally be reached M-Th 7am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ROBERT CAMMARATA/ Primary Examiner, Art Unit 2667