Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “part” in claims 1-12 and 15-16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 10, 13-18 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Miyata (US Patent Pub. No.: US 2024/0112498 A1).
Regarding claim 1, Miyata teaches a processing system, comprising: an imaging part acquiring an image (As illustrated in FIG. 1, the system 1 includes a camera 2 and a task analysis device 5 which is an example of the image analysis device in the present embodiment. [0025]) by imaging a work site (The system 1 is used for analyzing the efficiency or the like of operators W1, W2, and W3 who perform a plurality of tasks in a work site 6, such as a logistics
warehouse. [0025])
[Miyata, Fig. 1]
from above (Fig. 1, Camera 2); a detecting part acquiring a coordinate of a person visible in the image (The trajectory data D0 is generated based on the recognition result obtained by inputting image data acquired from the camera 2 to the image recognizer 51, for example. The map data D1 indicates a layout of various facilities such as the conveyor line 61 and the shelves 62 in the work site 6, with respect to a predetermined coordinate system. [0038]) by detecting the person in the image (The camera 2 in the system 1 is disposed to capture image of an entire area where the operators W1 to W3 move across the work site 6. [0029]); an identifying part identifying the person and associating identification information with the detected person (In the image recognizer 51 according to this embodiment, a person such as the operator W is set as a processing target. The recognition result may include information indicating a time at which the position of the processing target is recognized, for example. [0034]); a tracking part tracking the person associated with the identification information by using the coordinate (In the generation of the task combination tables T1, T2 (S11), the controller 50 may narrow down the candidates of the operator W using the coordinate information in the trajectory data D0 and the moving distance per unit time. [0109]); and a calculator calculating a dwell time of the person in a work area by using a result of the tracking (The task analysis system 1 illustrated in FIG. 1 recognizes the positions of the operator W at each time (i.e., the trajectory) in the work site 6, through the image recognition processing, and recognizes a performed task that is a task performed by the operator in the respective position at each time. [0059]. Upon obtaining the total number of times by which each task is performed during the analysis period for each of the operators, the controller 50 calculates the ratio of each task performed by the specific operator to generate the analysis chart 7. The analysis chart 7 indicates the ratio of each task, as a ratio of the time for which the task is performed, with respect to the analysis period, for example. [0083]), the work area being set in the work site (FIG. 3 is a view for explaining the map data D1. The map data D1 manages an arrangement of sections and task areas, which are described later, associating with data indicating a coordinate system of a map, such as a layout of various facilities in the work site 6 viewed from above. [0049].
[Miyata, Fig. 3]
).
Regarding claim 2, Miyata teaches the system according to claim 1, wherein the detecting part detects each of a plurality of the persons (In the image recognizer 51 according to this embodiment, a person such as the operator W is set as a processing target. [0034]) when the plurality of persons is visible in the image (The system 1 is used for analyzing the efficiency or the like of operators W1, W2, and W3 who perform a plurality of tasks in a work site 6, such as a logistics warehouse. [0025]), the identifying part associates a plurality of the identification information respectively with the plurality of persons (The identification information D3 is information identifying an individual, such as each of the operators W1 to W3. [0038]. The identification information D3 includes information indicating a time at which the identification operation is received from each of the operators W1 to W3, for example. [0056]), the tracking part tracks each of the plurality of persons (In the generation of the task combination tables T1, T2 (S11), the controller 50 may narrow down the candidates of the operator W using the coordinate information in the trajectory data D0 and the moving distance per unit time. [0109]. Hereinafter, the operators W1 to W3 are also referred to as operators W. [0025]), and the calculator calculates the dwell times that each of the plurality of persons dwells in each of a plurality of the work areas (The task analysis system 1 illustrated in FIG. 1 recognizes the positions of the operator W at each time (i.e., the trajectory) in the work site 6, through the image recognition processing, and recognizes a performed task that is a task performed by the operator in the respective position at each time. [0059]. Upon obtaining the total number of times by which each task is performed during the analysis period for each of the operators, the controller 50 calculates the ratio of each task performed by the specific operator to generate the analysis chart 7. The analysis chart 7 indicates the ratio of each task, as a ratio of the time for which the task is performed, with respect to the analysis period, for example. [0083]).
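For illustration of the mapped functionality only, the following minimal sketch shows the detect-identify-track-dwell logic that claims 1-2 recite. It is not taken from Miyata or from the application under examination; the names (WorkArea, Track, update_dwell), the rectangular work-area assumption, and all values are hypothetical.

```python
# Hypothetical sketch of the claim 1-2 pipeline: preset work areas, one track
# per identified person, and per-area dwell-time accumulation. Illustration
# only; not code from Miyata or the application under examination.
from dataclasses import dataclass, field

@dataclass
class WorkArea:
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float  # preset coordinates in the work-site coordinate system

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class Track:
    person_id: str                               # identification information
    dwell: dict = field(default_factory=dict)    # area name -> seconds dwelt

def update_dwell(track: Track, x: float, y: float,
                 areas: list[WorkArea], dt: float) -> None:
    """Add the frame interval dt to the dwell time of each work area that
    contains the tracked person's current coordinate (x, y)."""
    for area in areas:
        if area.contains(x, y):
            track.dwell[area.name] = track.dwell.get(area.name, 0.0) + dt
```

Keeping one Track per identified person yields the per-person, per-area dwell times recited in claim 2.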
Regarding claim 10, Miyata teaches the system according to claim 1, further comprising: a recognizing part recognizing the work area in the image (FIG. 3 is a view for explaining the map data D1. The map data D1 manages an arrangement of sections and task areas, which are described later, associating with data indicating a coordinate system of a map, such as a layout of various facilities in the work site 6 viewed from above. [0049].
[Miyata, Fig. 3]
), the recognizing part recognizing the work area by using a marker visible in the image or a coordinate of the work area, the coordinate of the work area being preset (The trajectory data D0 is generated based on the recognition result obtained by inputting image data acquired from the camera 2 to the image recognizer 51, for example. The map data D1 indicates a layout of various facilities such as the conveyor line 61 and the shelves 62 in the work site 6, with respect to a predetermined coordinate system. [0038]).
Regarding claim 13, Miyata teaches the system according to claim 1, wherein the calculator refers to a work time slot in which work is to be performed in the work area (The analysis chart 7 in the system 1 presents a ratio of items including "main task", "sub-task", and "non-task" performed by each of the operators W1 to W3 in the analysis period. [0027]), and does not include, in the dwell time, a time that the person dwells in the work area during a time slot other than the work time slot (An idling state and the like not related to the main task is classified as non-task. [0027]).
Regarding claim 14, Miyata teaches the system according to claim 1, wherein the calculator does not include, in the dwell time, a time in which the person dwells in the work area and moves at not less than a prescribed speed (An idling state and the like not related to the main task is classified as non-task. [0027]. It is common knowledge that a person moving very quickly (i.e., at not less than a prescribed speed) is unlikely to be performing the work.).
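As a hypothetical sketch only (not Miyata's method; the time slot and speed threshold are invented placeholders), the exclusions of claims 13-14 amount to gating dwell accumulation on the work time slot and on an instantaneous speed check:

```python
# Hypothetical gating of dwell accumulation, illustrating claims 13-14 only.
import math

def counts_toward_dwell(t: float, slot: tuple[float, float],
                        prev_xy: tuple[float, float], xy: tuple[float, float],
                        dt: float, speed_limit: float) -> bool:
    """Count this frame toward dwell time only if the timestamp t falls inside
    the work time slot (claim 13) and the person's speed over the last frame
    interval stays below the prescribed limit (claim 14)."""
    in_slot = slot[0] <= t <= slot[1]
    speed = math.dist(prev_xy, xy) / dt   # distance per unit time
    return in_slot and speed < speed_limit
```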
Regarding claim 15, Miyata teaches the system according to claim 1, further comprising: an output part displaying an object overlaid on the image (FIG. 3 is a view for explaining the map data D1. The map data D1 manages an arrangement of sections and task areas, which are described later, associating with data indicating a coordinate system of a map, such as a layout of various facilities in the work site 6 viewed from above. [0049].
[figure reproduced from Miyata]
)
and displaying the dwell time associated with the object (The system 1 may include a monitor 4 for presenting a user 3, such as an administrator or an analyzer in the work site 6, with an analysis chart 7 regarding a predetermined analysis period. [0025]. Upon obtaining the total number of times by which each task is performed during the analysis period for each of the operators, the controller 50 calculates the ratio of each task performed by the specific operator to generate the analysis chart 7. The analysis chart 7 indicates the ratio of each task, as a ratio of the time for which the task is performed, with respect to the analysis period, for example. [0083].
[figure reproduced from Miyata]
), the object indicating the work area (FIG. 3 is a view for explaining the map data D1. The map data D1 manages an arrangement of sections and task areas, which are described later, associating with data indicating a coordinate system of a map, such as a layout of various facilities in the work site 6 viewed from above. [0049].
[Miyata, Fig. 3]
).
Regarding claim 16, Miyata teaches the system according to claim 15, wherein the output part is configured to display a user interface, the image is displayed in the user interface (The output I/F 55 outputs signals such as video signals to an external display device such as a monitor and a projector for displaying various types of information, in compliance with the HDMI standard, for example. [0042]), and the output part sets a coordinate of the work area based on an input of a user for the image (FIG. 3 is a view for explaining the map data D1. The map data D1 manages an arrangement of sections and task areas, which are described later, associating with data indicating a coordinate system of a map, such as a layout of various facilities in the work site 6 viewed from above. Hereinafter, two directions that are orthogonal to each other on a horizontal plane in the work site 6 are referred to as an X direction and a Y direction, respectively. The position in the work site 6 is defined by an X coordinate indicating a position in the X direction and a Y coordinate indicating a position in the Y direction, for example. [0049].
[Miyata, Fig. 3]
).
Apparatus claim 17 is drawn to an apparatus corresponding to the system claimed in claim 1. Therefore, apparatus claim 17 corresponds to claim 1 and is rejected for the same reasons of anticipation as stated above.
Method claim 18 is drawn to a method of using the apparatus corresponding to the system claimed in claim 1. Therefore, method claim 18 corresponds to claim 1 and is rejected for the same reasons of anticipation as stated above.
Claim 20 is drawn to a non-transitory computer-readable storage medium storing executable instructions for carrying out the functions of the system claimed in claim 1. Therefore, claim 20 corresponds to claim 1 and is rejected for the same reasons of anticipation as stated above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4, 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Miyata (US Patent Pub. No.: US 2024/0112498 A1), hereinafter Miyata, in view of Ahmed (A Robust Features-Based Person Tracker for Overhead Views in Industrial Environment, IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 3, JUNE 2018), hereinafter Ahmed.
Regarding claim 3, Miyata teaches all of the elements of the claimed invention as stated in claim 1 except for the following limitations as further recited. However, Ahmed teaches wherein the imaging part acquires a plurality of the images including a first image, a second image, and a third image, the third image is acquired after the first image, the second image is acquired between the first image and the third image, when the person is detected in the first and third images but not in the second image (The video total time length is 76 s with nearly 30 s of missing data. Page 1604 right column 3rd paragraph. The image before the image with the missing data is the first image. The image after the image with the missing data is the third image. The image with the missing data is the second image.), the tracking part performs recovery processing including: setting a prescribed range referenced to the coordinate of the person in the first image (In order to verify the correctness of detected position a distance threshold is chosen. This distance threshold is measured in pixels, i.e., difference in number of pixels between the detected position and preactual position (ground truth). Page 1602 right column 1st paragraph); calculating a time difference between a time at which the first image is acquired and a time at which the third image is acquired (The video total time length is 76 s with nearly 30 s of missing data. Page 1604 right column 3rd paragraph); and deeming the person visible in the third image to be the same as the person visible in the first image when the coordinate of the person in the third image is within the range (In order to verify the correctness of detected position a distance threshold is chosen. This distance threshold is measured in pixels, i.e., difference in number of pixels between the detected position and preactual position (ground truth). Page 1602 right column 1st paragraph) and the time difference is less than a prescribed threshold (The 30 missing seconds comprises of total 730 frames that are missing. Page 1604 right column 3rd paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Miyata to incorporate the teachings of Ahmed to perform tracking recovery processing when there is occlusion and deem the person visible in the third image to be the same as the person visible in the first image when the coordinate of the person in the third image is within the range and the time difference is less than a prescribed threshold, in order to provide a more effective and robust algorithm for tracking people in videos obtained through a camera installed overhead of the motion space.
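For clarity, a minimal hypothetical sketch of the claimed recovery processing as mapped above follows. It is not Ahmed's algorithm; the pixel range and maximum time gap are invented placeholders.

```python
# Hypothetical sketch of the claim 3 recovery processing: a reappearing
# detection is deemed the same person as a lost track when it lies within a
# prescribed range of the last known coordinate and the detection gap is
# shorter than a prescribed time threshold. Values are illustrative only.
import math

def recover_track(last_xy: tuple[float, float], last_t: float,
                  new_xy: tuple[float, float], new_t: float,
                  range_px: float = 50.0, max_gap_s: float = 30.0) -> bool:
    within_range = math.dist(last_xy, new_xy) <= range_px  # prescribed range
    within_gap = (new_t - last_t) < max_gap_s              # time difference
    return within_range and within_gap  # deem same person if both hold
```

Re-associating the original identification information on a successful recovery corresponds to the claim 4 behavior; abandoning the track once the gap exceeds the prescribed period corresponds to the claim 9 behavior.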
Regarding claim 4, Ahmed in the combination teaches the system according to claim 3, wherein the tracking part associates, with the person visible in the third image deemed to be the same, the identification information associated with the person visible in the first image (The trajectories of person’s detected positions can be seen in Fig. 9. The detected positions are found to be close to ground truths as depicted in Fig. 9(a). Page 1604 right column last paragraph).
Regarding claim 9, Ahmed in the combination teaches the system according to claim 3, wherein the tracking part stops tracking the person when a state in which the person is not detected continues for not less than a prescribed period (When the rHOG algorithm is unable to detect any conformity to existing tracker object up to five frames then that particular object is removed. Page 1601 left column last paragraph).
Method claim 19 is drawn to a method of using the apparatus corresponding to the system claimed in claim 3. Therefore, method claim 19 corresponds to claim 3 and is rejected for the same reasons of obviousness as stated above.
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Miyata (US Patent Pub. No.: US 2024/0112498 A1), hereinafter Miyata, in view of Ahmed (A Robust Features-Based Person Tracker for Overhead Views in Industrial Environment, IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 3, JUNE 2018), hereinafter Ahmed, further in view of Benaskeur (Cooperation in Distributed Surveillance, 2010 International Conference on Autonomous and Intelligent Systems, AIS 2010, Povoa de Varzim, Portugal, 2010, pp. 1-6, doi: 10.1109/AIS.2010.5547043), hereinafter Benaskeur.
Regarding claim 5, Miyata and Ahmed teach all of the elements of the claimed invention as stated in claim 3 except for the following limitations as further recited. However, Benaskeur teaches wherein the tracking part changes the range according to a change of a step count of the person after the first image is acquired (The original tracking platform updates the others as it gains additional confidence in the target’s trajectory. When the target leaves the sensing region (i.e., a change of a step count of the person) of the original tracking platform, the other platforms will have the latest prediction on where the target is going (i.e., the tracking part changes the range). Page 3 right column 2nd paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Miyata and Ahmed to incorporate the teachings of Benaskeur to change the range of the tracking part according to a change of a step count of the person after the first image is acquired in order to ensure the continuity of tracking of critical targets.
Regarding claim 6, Benaskeur in the combination teaches the system according to claim 5, wherein the tracking part stops tracking the person when the change of the step count is greater than a prescribed threshold (The original tracking platform updates the others as it gains additional confidence in the target’s trajectory. When the target leaves the sensing region (i.e., the change of the step count is greater than a prescribed threshold) of the original tracking platform, the other platforms will have the latest prediction on where the target is going (i.e., the original tracking platform stops tracking the person). Page 3 right column 2nd paragraph).
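As a hypothetical sketch of the claim 5-6 logic as mapped above (not Benaskeur's method; the per-step increment and step threshold are invented):

```python
# Hypothetical illustration of claims 5-6: widen the recovery search range as
# the person's estimated step count grows during the detection gap (claim 5),
# and stop tracking once the step count exceeds a prescribed threshold
# (claim 6). All constants are invented for illustration.
def search_range(base_px: float, step_count: int,
                 px_per_step: float = 12.0, max_steps: int = 100):
    if step_count > max_steps:
        return None                            # claim 6: stop tracking
    return base_px + step_count * px_per_step  # claim 5: range grows with steps
```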
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Miyata (US Patent Pub. No.: US 2024/0112498 A1), hereinafter Miyata, in view of Ahmed (A Robust Features-Based Person Tracker for Overhead Views in Industrial Environment, IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 3, JUNE 2018), hereinafter Ahmed, further in view of Zhang (Chinese Patent Pub. No.: CN 114913198 A), hereinafter Zhang.
Regarding claim 8, Miyata teaches the system according to claim 3, wherein the tracking part calculates a distance between the coordinate of the person in the first image and a coordinate of an article in the work site (In the example illustrated in FIG. 6C, two operators W are recognized at positions within the task area A1 associated with two tasks, that is, packing and box preparation. In such a case, based on a relation that the box preparation is performed on the upstream side of the conveyor line 61 (+Y direction in FIG. 3), the controller 50 recognizes the performed tasks at the respective positions of the operators W, for example. [0078]. A distance between the operator W and the conveyor line 61 can be calculated based on their respective coordinates.).
Miyata does not teach the following limitations as further recited, but Zhang further teaches the tracking part performs the recovery processing (In a specific implementation, if the tracking result of the first target at the current time obtained in step S101 is an unmatched tracking, occlusion recovery and trajectory prediction are performed. Page 13 5th paragraph) when the distance is less than a prescribed threshold (step S1023, searching for a target which has the same motion attribute and object attribute as the first target and has a distance smaller than a preset threshold value with the first target from the plurality of targets. Page 13 9th paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Miyata and Ahmed to incorporate the teachings of Zhang to perform tracking recovery when the distance is less than a prescribed threshold, in order to predict and track the trajectory of the occluded object and ensure the integrity of target tracking.
Regarding claim 7, Ahmed in the combination teaches the system according to claim 3, wherein the tracking part calculates a movement speed of the person by using the time at which the first image is acquired and the coordinate of the person in the first image (However, in order to calculate the related position initially the velocity and its magnitude is measured as shown in (6) and (7). Page 1601 left column 1st paragraph).
Zhang in the combination further teaches the tracking part performs the recovery processing (In a specific implementation, if the tracking result of the first target at the current time obtained in step S101 is an unmatched tracking, occlusion recovery and trajectory prediction are performed. Page 13 5th paragraph) when the movement speed is less than a prescribed threshold (step S1023, searching for a target which has the same motion attribute and object attribute as the first target and has a distance smaller than a preset threshold value with the first target from the plurality of targets. Page 13 9th paragraph).
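For illustration, a hypothetical sketch combining the claim 7-8 conditions as mapped above (not code from Ahmed or Zhang; both thresholds are invented):

```python
# Hypothetical illustration of claims 7-8: compute the person's movement speed
# from timestamped coordinates (claim 7), and attempt recovery processing when
# the person is near an article (claim 8) or moving slowly (claim 7). The "or"
# combination of the two conditions is a simplification for illustration.
import math

def movement_speed(xy1: tuple[float, float], t1: float,
                   xy2: tuple[float, float], t2: float) -> float:
    return math.dist(xy1, xy2) / (t2 - t1)  # distance over elapsed time

def recovery_allowed(speed: float, person_xy: tuple[float, float],
                     article_xy: tuple[float, float],
                     speed_threshold: float = 1.5,
                     dist_threshold: float = 40.0) -> bool:
    slow_enough = speed < speed_threshold                             # claim 7
    near_article = math.dist(person_xy, article_xy) < dist_threshold  # claim 8
    return slow_enough or near_article
```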
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Miyata (US Patent Pub. No.: US 2024/0112498 A1), hereinafter Miyata, in view of Masaya (PCT Patent Pub. No.: WO 2023/286149 A1), hereinafter Masaya.
Regarding claim 11, Miyata teaches all of the elements of the claimed invention as stated in claim 10 except for the following limitations as further recited. However, Masaya teaches wherein the recognizing part also recognizes a passageway included in the work area (If the work analysis unit 53 cannot confirm that the position of the smart tag 3A belongs to a position that can be determined as valid work, the work with the position information and time interval during which it cannot be confirmed is determined as invalid work. Page 13 last paragraph), and a time that the person dwells in the passageway is not included in the dwell time (Specifically, based on the time information and the motion information, the work analysis unit 53 calculates a time interval (period) during which effective work cannot be confirmed, and the work in that interval is determined to be invalid work. Page 13 7th paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Miyata to incorporate the teachings of Masaya to recognize a passageway included in the work area and to exclude from the dwell time a time that the person dwells in the passageway, in order to analyze the productivity of the worker.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Miyata (US Patent Pub. No.: US 2024/0112498 A1), hereinafter Miyata, in view of Hidekazu (PCT Patent Pub. No.: WO 2023/119389 A1), hereinafter Hidekazu.
Regarding claim 12, Miyata teaches all of the elements of the claimed invention as stated in claim 1 except for the following limitations as further recited. However, Hidekazu teaches wherein the detecting part determines whether or not the person visible in the image is a worker (The identification unit (1c) matches the arbitrary person indicated by the movement data with the worker by taking the time as a key on the basis of the work data input by the input unit (1a) and the movement data generated from the detection unit (1b), and identifies the arbitrary person. Abstract), and the calculator does not include, in the dwell time, a time that the person other than a worker dwells in the work area (Then, the calculation unit 15 calculates an index indicating the efficiency (productivity) or amount of work for each worker based on the movement data and work data of the workers associated by the association unit 14. Page 9 7th paragraph. In other words, if the identification unit identifies a person as not a worker, then no calculation of productivity will be performed for this person.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Miyata to incorporate the teachings of Hidekazu to determine whether or not the person visible in the image is a worker and to exclude from the dwell time a time that a person other than a worker dwells in the work area, in order to grasp the efficiency or amount of the workers' work in a work area.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668