DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 39 is objected to because of the following informalities:
In claim 39, line 1, “according” should be “according to”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 25-37 are rejected under 35 U.S.C. 101 because claim 25 recites “A video manipulation computer program,” and the body of the claim recites program steps, such as “capture, provide, detect, fit, output,” which are nothing more than programmed instructions to be performed by a system. Therefore, the steps/elements recited in claim 25 are non-statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical “things.” They are neither computer components nor statutory processes, as they are not “acts” being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program’s functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program’s functionality to be realized, and is thus statutory. Accordingly, it is important to distinguish claims that define descriptive material per se from claims that define statutory inventions.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 25-29 and 31-39 are rejected under 35 U.S.C. 103 as being unpatentable over Poliakov et al. (U.S. 2021/0090347 A1) in view of Xu et al. (U.S. 2017/0018024 A1).
Claims 1-24. (Cancelled)
Regarding Claim 25, Poliakov discloses a video manipulation computer program comprising instructions which, when executed on at least one processor of a user device, cause the user device to perform manipulating of an input video stream provided by an imaging device of the user device (Poliakov, [0025] “a video stream and presentation of modified objects within the video stream…includes systems, methods, instruction sequences, and computing machine program products”; [0033] “the corresponding hardware (e.g., memory and processor) for executing the instructions”; and [0004] “Telecommunications devices use physical manipulation of the device in order to perform operations, devices are typically operated by manipulating an input device.” Poliakov teaches a video manipulation computer program including instructions that are executed by a processor to perform manipulating of an input video stream), wherein the video manipulation computer program being configured to:
capture the input video stream of the imaging device (Poliakov, [0026] “When the user taps or holds a video capture icon in the user interface, the camera begins to capture a video stream.” Poliakov teaches capturing the input video stream via a capture icon of the imaging device (a camera));
provide the captured input video stream as input video data to the manipulation computer program (Poliakov, [0004] “Telecommunications devices use physical manipulation of the device in order to perform operations, devices are typically operated by manipulating an input device, such as a touchscreen and modifying video streams in real time while the video stream is being captured” and Fig. 2, [0045] “the image capture component 210 directly receives the video stream captured by the image capture device. In some instances, the image capture component 210 passes all or part of the video stream (e.g., the set of images comprising the video stream) to one or more other components of the video modification system 160.” Poliakov teaches providing the captured input video stream as input video data (via the image capture component 210) to the manipulation computer program (the video modification system 160));
detect a person as a body item in the input video data, the body item representing the person in the input video data (Poliakov, [0047] “the object recognition component 220 includes facial tracking logic to identify all or a portion of a face within the one or more images and track landmarks of the face across the set of images of the video stream.” Poliakov teaches detecting (identifying) a person as a body item which represents the person in the input video data, e.g., the face of a person across the set of images of the input video stream);
fit a wearable video item on the body item in the input video data to provide manipulated video data (Poliakov, [0055] “The graphical representation of glasses is applied in a second subset of images occurring within the video stream as the video stream is being received” and [0056] “in applying the graphical representation of the glasses to the face. Based on the face characteristics, the rendering component 240 modifies the one or more dimensions to scale the graphical representation of the glasses to fit the portion of the face.” Poliakov teaches fitting a wearable video item (glasses) on the body item (a face) in the input video data to provide manipulated video data (the rendering component 240 scales the graphical representation of the glasses to fit the face)); and
However, Poliakov does not explicitly teach output the manipulated video data as manipulated video stream from the manipulation computer program.
Xu teaches output the manipulated video data as a manipulated video stream from the manipulation computer program (Xu, [0005] “the programming including instructions to: map an image of clothes to the video stream of the upper body of the user according to the keypoints; and display an augmented video stream of the upper body of the user with the image of the clothes overlaid over a portion of the video stream of the upper body of the user.” Xu teaches outputting (displaying) the manipulated video stream (an image of clothes mapped to the video stream of the upper body of the user) from the computer program (the programming)).
Poliakov and Xu are combinable because they are from the same field of endeavor, systems and methods for image processing, and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Poliakov to output the manipulated video stream, as taught by Xu (Xu, [0005]). Doing so may provide an interface for efficiently placing the functions of a virtual clothes-fitting system on a small mobile screen in a visually pleasing and usable manner (Xu, [0014]).
Regarding Claim 26, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 25, wherein the video manipulation computer program is further configured to:
register the manipulated video stream to an operating system of the user device as a virtual imaging device (Poliakov, [0003] “video conferencing allows two or more individuals to communicate with each other using a combination of software applications. Telecommunications devices may also record video streams to transmit as messages across a telecommunications network”; [0039] “An individual can register with the social messaging system 130 to become a member of the social messaging system 130”; and Fig. 17, [0093] “the software 902 includes layers such as an operating system 904.” Poliakov teaches registering the manipulated video stream to an operating system as a software layer).
Regarding Claim 27, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 25, wherein:
the imaging device comprises an image sensor configured to generate the input video stream, and the manipulation computer program is configured to capture the input video stream generated by the image sensor (Poliakov, [0078] “the image capture device of the product distribution machine is a camera, a digital image sensor (e.g., a CCD sensor or a CMOS sensor) capable of capturing the set of images of the video stream” and [0031] “the video modification system generates and modifies visual elements within the video stream based on data captured from the real-world environment.” Poliakov teaches the imaging device (the camera) including an image sensor to capture the input video stream and generate visual elements within the input video stream); or
the imaging device comprises an image sensor configured to generate the input video stream, and to provide the input video stream to the output of the imaging device, and
the manipulation computer program is configured to capture input video stream from the output of the imaging device.
Regarding Claim 28, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 25, wherein the video manipulation computer program is further configured to:
identify a first body part item in the detected body item in the input video data, the first body part item representing one body part of the person (Poliakov, [0047] “the object recognition component 220 includes facial tracking logic to identify all or a portion of a face within the one or more images and track landmarks of the face across the set of images of the video stream.” Poliakov teaches identifying a first body part item in the detected body item which represents the body part of the person in the input video data, e.g., identifying all or a portion of a face of a person across the set of images of the input video stream); and
fit the wearable video item on the identified first body item in the input video data to provide the manipulated video data (Poliakov, [0028] “the processing components, tracking movement of the face and three-dimensional glasses model, adjust visual aspects of the glasses model to mimic differing lighting conditions, angles, shapes, or shadows resulting from movement of the face and three-dimensional glasses model…analyze the face, scale and fit the three-dimensional glasses model, and present the modified video stream in real time as the video stream including the face is being simultaneously captured.” Poliakov teaches fitting a wearable video item (glasses) on the body item (a face) in the input video data to provide manipulated video data (the modified video stream is presented while the face is being simultaneously captured)).
Regarding Claim 29, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 25, wherein the video manipulation computer program is further configured to:
remove background image item from the input video data, the background image item comprising image data outside the body item (Poliakov, [0065] “the rendering component 240 to generate and present a red light, indicating image capture device recording, and receiving a subsequent selection corresponding to a second change in eyebrow position may cause the red light to be removed from the graphical representation of the glasses, the red light may be positioned proximate to an image capture device depicted as part of the graphical representation of the glasses.” Poliakov teaches removing the red light, indicating image capture device recording, on the background image item (the glasses), which is outside the body item (the face)),
fit the wearable video item on the detected body item (Poliakov, [0055] “The graphical representation of glasses is applied in a second subset of images occurring within the video stream as the video stream is being received” and [0056] “in applying the graphical representation of the glasses to the face. Based on the face characteristics, the rendering component 240 modifies the one or more dimensions to scale the graphical representation of the glasses to fit the portion of the face.” Poliakov teaches fitting a wearable video item (glasses) on the body item (a face) in the input video data (the rendering component 240 scales the graphical representation of the glasses to fit the face)), and
combine the removed background image item and the body item having the wearable video item to provide the manipulated video data having the wearable video item on the body item (Poliakov, [0068] “FIG. 5 identifying and tracking a face within a first set of images of a video stream and generating a graphical representation of a set of glasses affixed to the face in real time in a second set of images of the video stream while the video stream is being captured” and [0067] “in FIG. 6, a three-dimensional model 600 of a pair of glasses is rendered on a user 602.” Poliakov teaches combining (generating) the removed background image item (the glasses with the red light removed) and the body item (the face) having the wearable video item (the glasses) (Fig. 6)); or
remove background image item from the input video data, the background image item comprising image data outside the identified first body part item, fit the wearable video item on the identified first body item, and
combine the removed background image item and the first body part item having the wearable video item to provide the manipulated video data having the wearable video item on the first body part item; or
remove background image item from the input video data, the background image item comprising image data outside the body item, separating the first body part item from the body item, fit the wearable video item on the separated first body part item, combine the first body part item to the body item, and
combine the removed background image item and the body item having the wearable video item to provide the manipulated video data having the wearable video item on the first body part item.
Regarding Claim 31, the video manipulation computer program according to claim 25, Poliakov does not explicitly teach wherein the video manipulation computer program is further configured to:
maintain a wearable video item database having one or more wearable video item profiles, each of the wearable video item profiles comprising a wearable video item.
However, Xu teaches maintain a wearable video item database having one or more wearable video item profiles, each of the wearable video item profiles comprising a wearable video item (Xu, [0025] “The processing unit 501 may include a mass storage device 530” and [0020] “FIG. 2, A live video stream of the user's upper body will be obtained and presented on the display 210 of the wireless device. Images of clothing from which the user may select may be shown in a portion of the screen as in screen 216 or the user may navigate to screen 220 to select clothing.” Xu teaches maintaining a wearable video item in storage having one or more wearable video item profiles (fitting clothes, 220, Fig. 2), each including a wearable video item (fitting clothes)).
Poliakov and Xu are combinable; see the rationale in claim 25; or
maintain a wearable video item database having one or more wearable video item profiles, each of the wearable video item profiles comprising a wearable video item and a wearable video item identifier, the wearable video item identifier being specific to the wearable video item in the wearable video item profile.
Regarding Claim 32, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 31, wherein:
the wearable video item identifier is an identifier image item (Poliakov, [0031] “the video modification system identifies faces and fits various three-dimensional models (e.g., glasses, clothing, accessories, hairstyles, or devices) to the faces depicted within a field of view of an image capture device” and [0105] “the communication components 1064 detect identifiers or include components operable to detect identifiers.” Poliakov teaches the wearable video item identifier, e.g., glasses, being an identifier image item); or the wearable video item identifier is an identifier text item; or the wearable video item identifier is a machine-readable optical image item.
Regarding Claim 33, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 31, wherein the video manipulation computer program is further configured to:
add the wearable video item identifier to the manipulated video data (Poliakov, [0031] “the video modification system identifies faces and fits various three-dimensional models (e.g., glasses, clothing, accessories, hairstyles, or devices) to the faces depicted within a field of view of an image capture device”; [0105] “the communication components 1064 detect identifiers”; and [0027] “The processing components then apply the three-dimensional glasses model to the face by affixing the three-dimensional model to at least one of the two-dimensional coordinates.” Poliakov teaches adding (applying) the wearable video item identifier (a 3D glasses model) to the manipulated video data (the manipulated video data of the face)); or
provide a wearable video item identifier layer comprising the wearable video item identifier; and
combine the wearable video item identifier layer to the manipulated video data (Poliakov, [0027] “The processing components then apply the three-dimensional glasses model to the face by affixing the three-dimensional model to at least one of the two-dimensional coordinates.” Poliakov teaches combining (applying) the wearable video item identifier (a 3D glasses model) layer to the manipulated video data (the manipulated video data of the face)).
Regarding Claim 34, the video manipulation computer program according to claim 25, Poliakov does not explicitly teach wherein:
the wearable video item is a garment image item;
However, Xu teaches the wearable video item is a garment image item (Xu, [0020] “FIG. 2. In the live video stream acquisition mode 202. Images of clothing from which the user may select may be shown in a portion of the screen as in screen 216 or the user may navigate to screen 220 to select clothing.” Xu teaches the wearable video item being a garment image item (clothing, 220)); or
the wearable video item is a clothing accessory image item; or
the wearable video item is an eyeglasses image item; or
the first body part item is head item and the wearable video item is a headwear image item or an eyeglasses image item; or
the first body part item is a torso item and the wearable video item is a shirt image item or a jacket image item.
Regarding Claim 35, the video manipulation computer program according to claim 34, Poliakov does not explicitly teach wherein:
the wearable video item is two-dimensional image item;
However, Xu teaches the wearable video item is a two-dimensional image item (Xu, [0020] “FIG. 2. In the live video stream acquisition mode 202. Images of clothing from which the user may select may be shown in a portion of the screen as in screen 216 or the user may navigate to screen 220 to select clothing.” Xu teaches the wearable video item being a two-dimensional image item (clothing, 220)); or
the wearable video item is a partly three-dimensional image item; or
the wearable video item is a three-dimensional image item.
Poliakov and Xu are combinable; see the rationale in claim 25.
Regarding Claim 36, the video manipulation computer program according to claim 25, Poliakov does not explicitly teach wherein the video manipulation computer program is further configured to carry out the fitting the wearable video item separately for successive image frames of the input video data.
However, Xu teaches carry out the fitting the wearable video item separately for successive image frames of the input video data (Xu, [0020] “FIG. 2. In the live video stream acquisition mode 202. The indicator 222 (e.g., changing from a red dot to a green dot) on the display 212 prompts the user that they may proceed to the virtual clothes-fitting mode 206 when the user body part is in the correct place with respect to the guide lines 224 as shown in screen 214. Images of clothing from which…the user may navigate to screen 220 to select clothing, screen 220 is superimposed onto the image of the user's upper body in display 218.” Xu teaches carrying out the fitting of the wearable video item separately (clothing selection 220, Fig. 2) for successive image frames of the input video data (212, 214, 216, 218, Fig. 2)).
Poliakov and Xu are combinable; see the rationale in claim 25.
Regarding Claim 37, a combination of Poliakov and Xu discloses the video manipulation computer program according to claim 25, wherein the video manipulation computer program is further configured to:
display the manipulated video stream on a display of the user device (Poliakov, [0037] “Each of the client devices 110 can comprise a computing device that includes at least a display” and [0067] “in FIG. 6, a three-dimensional model 600 of a pair of glasses is rendered on a user 602.” Poliakov teaches displaying the manipulated video stream on a display of the user device (Fig. 6)); or
input the manipulated video stream to a communication computer program running in the user device.
Regarding Claim 38, a combination of Poliakov and Xu discloses a video communication system (Poliakov, [0003] “Telecommunications devices may also record video streams to transmit as messages across a telecommunications network”; the video communication system) comprising:
a video manipulation computer program comprising instructions which, when executed on at least one processor of a first user device, cause the first user device to perform manipulating of an input video stream provided by an imaging device of the first user device (Poliakov, [0025] “a video stream and presentation of modified objects within the video stream…includes systems, methods, instruction sequences, and computing machine program products”; [0033] “the corresponding hardware (e.g., memory and processor) for executing the instructions”; and [0004] “telecommunications applications and devices exist to provide two-way video communication between two devices. Telecommunications devices use physical manipulation of the device in order to perform operations. For example, devices are typically operated by changing an orientation of the device or manipulating an input device, such as a touchscreen.” Poliakov teaches a video manipulation computer program (computing machine program) comprising instructions which, when executed on at least one processor of a first user device, cause the first user device to perform manipulation via an input device);
a communication network configured to provide communication connection between two or more user devices for data exchange between the two or more user devices (Poliakov, [0003] “Telecommunications devices may also record video streams to transmit as messages across a telecommunications network” and [0004] “telecommunications applications and devices exist to provide two-way video communication between two devices.” Poliakov teaches a communication network configured to provide a communication connection for data exchange between two user devices); and
a communication computer program comprising instructions which, when executed on the at least one processor of the first user device, cause the first user device to provide a video conference connection with at least one second user device over the communication network and to exchange video data with the at least one second user device (Poliakov, [0003] “Telecommunications applications and devices can provide communication between multiple users using a variety of media, video conferencing allows two or more individuals to communicate with each other using a combination of software applications, telecommunications devices, and a telecommunications network.” Poliakov teaches providing a video conference connection with at least one second user device over the communication network and exchanging video data with the at least one second user device), wherein:
the manipulation computer program being configured to:
capture the input video stream of the imaging device,
provide the captured input video stream as input video data to the manipulation computer program,
detect a person as a body item in the input video data, the body item representing the person in the input video data,
fit a wearable video item on the body item in the input video data to provide manipulated video data, and
output the manipulated video data as manipulated video stream from manipulation computer program;
Claim 38 is substantially similar to claim 25 and is rejected based on similar analyses.
the communication computer program being configured to:
receive the manipulated video stream (Poliakov, [0055] “In operation 340, the rendering component 240 applies a graphical representation of glasses to the face based on the face characteristics. The graphical representation of glasses is applied in a second subset of images occurring within the video stream as the video stream is being received.” Poliakov teaches receiving the manipulated video stream), and
broadcast the manipulated video stream to the at least one second user device over the communication network (Poliakov, [0004] “telecommunications applications and devices exist to provide two-way video communication between two devices, such as modifying images within the video stream during pendency of a communication session” and [0003] “Telecommunications devices may also record video streams to transmit as messages across a telecommunications network.” Poliakov teaches broadcasting (transmitting) the manipulated video stream (recorded video streams) to the second user device over the telecommunications network).
Regarding Claim 39, a combination of Poliakov and Xu discloses the video communication system according to claim 38, wherein:
the video manipulation computer program is configured to register the manipulated video stream to an operating system of the user device as a virtual imaging device (Poliakov, [0003] “video conferencing allows two or more individuals to communicate with each other using a combination of software applications. Telecommunications devices may also record video streams to transmit as messages across a telecommunications network”; [0039] “An individual can register with the social messaging system 130 to become a member of the social messaging system 130”; and Fig. 17, [0093] “the software 902 includes layers such as an operating system 904.” Poliakov teaches registering the manipulated video stream to an operating system as a software layer), and
However, Poliakov does not explicitly teach the communication computer program is configured to receive the manipulated video stream as an output video stream from the virtual imaging device.
Xu teaches receive the manipulated video stream as an output video stream from the virtual imaging device (Xu, [0005] “the programming including instructions to: map an image of clothes to the video stream of the upper body of the user according to the keypoints; and display an augmented video stream of the upper body of the user with the image of the clothes overlaid over a portion of the video stream of the upper body of the user.” Xu teaches outputting (displaying) the manipulated video stream (an image of clothes mapped to the video stream of the upper body of the user) from the virtual imaging device).
Poliakov and Xu are combinable; see the rationale in claim 25.
Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Poliakov et al. (U.S. 2021/0090347 A1) in view of Xu et al. (U.S. 2017/0018024 A1) and further in view of Wiesel et al. (U.S. 2020/0183969 A1).
Regarding Claim 30, the video manipulation computer program according to claim 29, a combination of Poliakov and Xu does not explicitly teach wherein the video manipulation computer program is further configured to:
split the input video data into a body item layer and a background layer, the body part layer comprising the detected body item and the background layer comprising image data outside the detected body item, fit the wearable video item on the detected body item in the body item layer, and combine the background layer and the body item layer to provide the manipulated video data having the wearable video item on the body item;
However, Wiesel teaches split the input video data into a body item layer and a background layer, the body part layer comprising the detected body item and the background layer comprising image data outside the detected body item, fit the wearable video item on the detected body item in the body item layer, and
combine the background layer and the body item layer to provide the manipulated video data having the wearable video item on the body item (Wiesel, [0074] “User extraction module process extracts the image of the user from the background…uses artificial intelligence techniques in order to distinguish between the user's body and clothes from the background…to different user poses and different backgrounds”; [0120] “Reference is made to FIG. 10, a product image may be converted into a Product Mask/Template (image mask 1001); and a user's image may be converted into a User Body Mask/Template (image mask 1002)”; and [0134] “FIG. 15, The system may utilize a virtual dressing module, to perform the combining of the product image and the user image.” Wiesel teaches splitting (distinguishing) the input video image into a body item layer and a background layer (Fig. 10, 1002), fitting the wearable video item on the detected body item (Fig. 10, 1001), and combining the background layer and the body item layer having the wearable video item on the body item (Fig. 15));
Poliakov, Xu, and Wiesel are combinable because they are from the same field of endeavor, systems and methods for image processing, and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Poliakov to split the input video data into a body item layer and a background layer and to combine the background layer and the body item layer having the wearable video item on the body item, as taught by Wiesel (Wiesel, [0074], [0120], [0134]). Doing so may save the user precious time and effort, and may enable the user to receive on-the-spot immediate visual feedback with regard to the look of a clothes article or other product that is virtually dressed on an image of the user (Wiesel, [0070]); or
split the input video data into a first body part item layer and a background layer, the first body part item layer comprising the identified first body part item and the background layer comprising image data outside the identified first body part item,
fit the wearable video item on the identified first body item in the first body part item layer,
combine the background layer and the first body part item layer to provide the manipulated video data having the wearable video item on the first body part item; or
split the input video data into a body item layer and a background layer, the body item layer comprising the detected body item and the background layer comprising image data outside the detected body item,
split the body item layer into a first body part item layer and a second body item layer, the first body item layer comprising the identified first body part item and the second body item layer comprising body item outside the identified first body part item,
fit the wearable video item on the identified first body part item in the first body part item layer, and
combine first body part item layer, the second body item layer and the background layer to provide the manipulated video data having the wearable video item on the first body part item.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Lo et al. (U.S. 2017/0193687 A1) and Zimring et al. (U.S. 2021/0365328 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU, whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
/KHOA VU/Examiner, Art Unit 2611