DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 2 is objected to because of the following typographical error: the last limitation of the claim states "binds it to the user", where the term "it" should be given more context, such as "binds the registration notice to the user". Appropriate correction is required.
Claim 3 is objected to because of the following typographical error: the limitation of the claim states "it also displays", where the term "it" should be "the displaying screen". Appropriate correction is required.
Claim 7 is objected to because of the following typographical error: the limitation of the claim states "style, another, and", where the term "another" could be "another classifying label input field". Appropriate correction is required.
Claim 10 is objected to because of the following typographical error: the last limitation of the claim states "the APP", which should be "the application program". Appropriate correction is required.
Specification
The disclosure is objected to because of the following typographical error: paragraph 67 of the specification refers to a "Figure 6" where, based on the drawings provided, "Figure 6" should be referred to as "Fig. 6". Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 7, and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sarna et al. (U.S. Pub. No. 20200175740).
Regarding claim 1, Sarna discloses a method for rapidly generating multiple customized user avatars (Figure 6, 600; also, paragraph 17, line(s) 1-2 "FIG. 6 is a flowchart illustrating a first example method for creating a cartoon using a classifier."), the method including: through an Internet (Figure 1, 102; also, paragraph 72, line(s) 1-8 "The network 102 may be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration or other configurations. Furthermore, the network 102 may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or other interconnected data paths across which multiple devices may communicate"), an electronic device transmits a plurality of classifying labels selected by a user (paragraph 7, line(s) 25-27 "editing the cartoon asset for the attribute included in the cartoon avatar based on the user input"; also, Figure 1, 106a; also, paragraph 74, line(s) 1-9 "the computing device 106 (any or all of 106a through 106n) can be any computing device that includes a memory and a processor. For example, the computing device 106 can be a laptop computer, a desktop computer, a tablet computer, a mobile telephone, a smart phone, a personal digital assistant, a mobile email device, a portable game player, a portable music player, a television with one or more processors embedded therein or coupled thereto or any other electronic device"; also, Figure 4B, 400, 416; also, paragraph 61, line(s) 1-4 "the method 400 continues in block 416 to create one or more attribute specific classifiers. In block 416, the method 400 selects an attribute"; also, paragraph 64, line(s) 18-24 "The process in blocks 416 to 428 can be used to produce similar classifiers for other attributes such as skin tone, eye shape, nose, hairstyles, etc. 
If there are more attributes that need classifiers, the method 400 return to block 416 to select the next attribute needing a classifier. If not, the method 400 continues to begin use of the generated classifiers") from a classifying label selection interface of an application program displayed on a displaying screen of the electronic device to a model training server (paragraph 25, line(s) 3-4 "the application server 108 may include a processor 216,"; also, paragraph 26, line(s) 10-14 "In some implementations, the processor 216 may be capable of generating and providing electronic display signals to a display device, supporting the display of images, capturing and transmitting images, performing complex tasks including various types of feature extraction and sampling, etc."; also, paragraph 32, line(s) 1-7 "the memory 218 may include the cartoon generation module 116o and the classifier creation module 118. The cartoon generation module 116o comprises a user interface module 202, one or more classifiers 204, a cartoon asset module 206, and a rendering module 208. The classifier creation module 118 comprises a training module 222"; also, paragraph 33, line(s) 3-12 "the user interface module 202, the one or more classifiers 204, the cartoon asset module 206, and the rendering module 208 are sets of instructions executable by the processor 216 to provide their respective acts and/or functionality. 
In other implementations, the user interface module 202, the one or more classifiers 204, the cartoon asset module 206, and the rendering module 208 are stored in the memory 218 of the computing device 106 and are accessible and executable by the processor 216 to provide their respective acts and/or functionality"), the model training server receiving those classifying labels (Figure 5, 506; also, paragraph 66, line(s) 3 "The method 500 accesses 502 a large image data set"; also, paragraph 66, line(s) 10-12 "In block 506, the method 500 uses the images to train a deep neural network 226"; also, paragraph 66, line(s) 18-20 "method 500 then searches 508 for a smaller set of images that include faces (classifier for human faces) from the large image data set"), organizing those classifying labels into a label parameter group (paragraph 66, line(s) 22-24 "The method 500 then cleans 510 the smaller set of images from block 508 by removing images that do not match the class."), extracting a model parameter list from a database server (paragraph 27, line(s) 5-8 "The memory 218 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc."; also, paragraph 40, line(s) 1-4 "The cartoon asset module 206 may be steps, processes, functionalities or a device including routines for storing, managing, and providing access to the individual sets of cartoon assets."), further screening out a corresponding plurality of model parameters that are the same as or similar to the label parameter group (paragraph 66, line(s) 24-29 "The method 500 then crowd sources 512 the clean smaller set of images to produce positive images (tuples of image, attribute, label). 
In some implementations, this can be done by a dedicated group of human raters rather than crowdsourcing."), then, based on those model parameters, extracting a corresponding plurality of avatar models from the database server (paragraph 27, line(s) 1-11 "The memory 218 may store and provide access to data to the other components of the application server 108. In some implementations, the memory 218 may store instructions and/or data that may be executed by the processor 216. The memory 218 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc. The memory 218 may be coupled to the bus 214 for communication with the processor 216, the communication unit 220, the data store 222 or the other components of the application server 108."; also, paragraph 67, line(s) 1-19 "the method 600 continues to block 606 and processes image with a shallow neural network model 228 as a classifier 204 to generate one or more attribute(s) and label(s). One implementation of this process 606 is described in more detail below with reference to FIG. 7. Then the method 600 determines 608 the cartoon assets corresponding to the attribute and label. For example, the cartoon asset module 206 may search the cartoon assets for particular assets that match the attribute and label provided by the block 606. Once the cartoon assets corresponding to the attribute and label have been identified"), packing those avatar models and transmitting them to the application program, the application program receiving those avatar models, unpacking them (paragraph 76, line(s) 1-16 "The application server 108 may be a computing device that includes a processor, a memory and network communication capabilities. The application server 108 is coupled to the network 102 via a signal line 110. 
The application server 108 may be configured to include the cartoon generation module 116o and the classifier creation module 118 in some implementations. The application server 108 is a server for handling application operations and facilitating interoperation with back end systems. Although only a single application server 108 is shown, it should be understood that there could be any number of application servers 108 sending messages to the computing devices 106, creating and providing classifiers, and generating cartoons from photos. The application server 108 may communicate with the computing devices 106a through 106n via the network 102. The application server 108 may also be configured to receive status and other information from the computing devices 106a through 106n via the network 102."), and showing them on the displaying screen for the user to select (paragraph 7, line(s) 23-28 "presenting the cartoon avatar on a display of a user device, receiving user input on the display, editing the cartoon asset for the attribute included in the cartoon avatar based on the user input, and re-rendering the cartoon avatar based on the edited cartoon asset").
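Neither the application nor the cited reference provides source code; the following is a minimal, illustrative sketch of the claimed flow mapped above (classifying labels organized into a label parameter group, model parameters screened against that group, and matching avatar models extracted from a store). All names and data structures are hypothetical, and "similar" is approximated here as sharing at least one label.

```python
# Hypothetical sketch of the claim 1 flow: labels -> label parameter group
# -> screened model parameters -> avatar models extracted from a store.

def organize_labels(labels):
    """Organize the user's selected classifying labels into a label parameter group."""
    return frozenset(labels)

def screen_parameters(parameter_list, group):
    """Screen out model parameters the same as or similar to the group
    (here, 'similar' means sharing at least one classifying label)."""
    return [p for p in parameter_list if group & p["labels"]]

def extract_models(store, params):
    """Extract the avatar models keyed by the screened model parameters."""
    return [store[p["model_id"]] for p in params]

store = {"m1": "avatar-1", "m2": "avatar-2", "m3": "avatar-3"}
parameter_list = [
    {"model_id": "m1", "labels": {"garment", "style"}},
    {"model_id": "m2", "labels": {"background"}},
    {"model_id": "m3", "labels": {"ornament", "action"}},
]
group = organize_labels(["garment", "action"])
models = extract_models(store, screen_parameters(parameter_list, group))
print(models)  # the avatar models whose parameters share a selected label
```

In this toy run, only the models whose label sets intersect the selected group ("m1" via "garment" and "m3" via "action") are returned for display to the user.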
Regarding claim 2, Sarna discloses the method defined in claim 1, wherein the application program receives the user's selection of one of those avatar models (paragraph 7, line(s) 25-27 "editing the cartoon asset for the attribute included in the cartoon avatar based on the user input"; also, paragraph 43, line(s) 2-4 "[t]he cartoon asset module 206 receives one or more user selections of a cartoon style and an emotional expression"), then transmits the selected avatar model along with a registration notice to the database server (paragraph 7, line(s) 25-27 "editing the cartoon asset for the attribute included in the cartoon avatar based on the user input"; also, paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."; also, paragraph 74, line(s) 1-9 "the computing device 106 (any or all of 106a through 106n) can be any computing device that includes a memory and a processor. For example, the computing device 106 can be a laptop computer, a desktop computer, a tablet computer, a mobile telephone, a smart phone, a personal digital assistant, a mobile email device, a portable game player, a portable music player, a television with one or more processors embedded therein or coupled thereto or any other electronic device"), the database server, in turn, updates a status code of the corresponding avatar model to "registered" as per the registration notice, and binds it to the user (paragraph 76, line(s) 1-16 "The application server 108 may be a computing device that includes a processor, a memory and network communication capabilities. The application server 108 is coupled to the network 102 via a signal line 110. The application server 108 may be configured to include the cartoon generation module 116o and the classifier creation module 118 in some implementations. 
The application server 108 is a server for handling application operations and facilitating interoperation with back end systems. Although only a single application server 108 is shown, it should be understood that there could be any number of application servers 108 sending messages to the computing devices 106, creating and providing classifiers, and generating cartoons from photos. The application server 108 may communicate with the computing devices 106a through 106n via the network 102. The application server 108 may also be configured to receive status and other information from the computing devices 106a through 106n via the network 102.").
Regarding claim 3, Sarna discloses the method defined in claim 1, wherein the displaying screen shows those avatar models (paragraph 25, line(s) 3-4 "the application server 108 may include a processor 216,"; also, paragraph 26, line(s) 10-14 "In some implementations, the processor 216 may be capable of generating and providing electronic display signals to a display device, supporting the display of images, capturing and transmitting images, performing complex tasks including various types of feature extraction and sampling, etc."; also, paragraph 33, line(s) 3-12 "the user interface module 202, the one or more classifiers 204, the cartoon asset module 206, and the rendering module 208 are sets of instructions executable by the processor 216 to provide their respective acts and/or functionality. In other implementations, the user interface module 202, the one or more classifiers 204, the cartoon asset module 206, and the rendering module 208 are stored in the memory 218 of the computing device 106 and are accessible and executable by the processor 216 to provide their respective acts and/or functionality") and also displays a re-generation icon; if the user clicks the re-generation icon, the application program transmits a regeneration notice to the model training server (paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."; also, paragraph 35, line(s) 1-12 "The user interface module 202 generates user interfaces for a user to input photos, control the generation of cartoon images or icons, and use the cartoon images or icons in one or more user applications. 
For instance, the user interface module 202 is coupled to the other components of the application server 108 to send and/or receive content including images and control inputs to and/or from the one or more classifiers 204, the cartoon asset module 206, and the rendering module 208. The user interface module 202 is also coupled to the bus 214 for communication and interaction with the other components of the application server 108 and the system 100."; also, paragraph 32, line(s) 1-7 "the memory 218 may include the cartoon generation module 116o and the classifier creation module 118. The cartoon generation module 116o comprises a user interface module 202, one or more classifiers 204, a cartoon asset module 206, and a rendering module 208. The classifier creation module 118 comprises a training module 222").
Regarding claim 7, Sarna discloses the method defined in claim 1, those classifying labels including garment, action, object, person, background, ornament, style, another, and a classifying label input field (paragraph 36, line(s) 7-23 "A label corresponds to a value of one or more attributes of the user including the face. For example, attributes of faces may include: skin tone/color, hair length (short, long, medium, bald), hair color (black, dark brown, light brown, auburn, orange, strawberry blonde, dirty blonde, bleached blonde, grey and white), hair texture (straight, wavy, curly, coily), age, gender, eye shape (round eyes, close-set eyes, wide-set eyes, deep-set eyes, prominent eyes, downturned eyes, upturned eyes), mouth shape, jaw/face shape (round, square, triangle, oval), face width (narrow, average, wide), nose shape, eye color, facial hair, face parts composition (proportions), hair type, mouth color, hair style, glasses, ear shape, face markings, headwear, secondary hair colors (non-natural etc.), eyebrow shape (arched, steep arched, S-shaped, rounded, straight), eyebrow thickness (thin, natural, thick), lip shape and a number of other attributes.").
Regarding claim 9, Sarna discloses the method defined in claim 1, wherein the model training server further includes a similarity computation to compute a degree of model similarity between those avatar models in the same group (paragraph 66, line(s) 30-36 "the method 500 mixes 514 the clean smaller set of images (positive images) with negative images to produce a base set of image. The method 500 splits 516 this base set of images into a training set and a testing set. Then the method 500 trains 518 the classifier of a shallow neural network using the training data. The method 500 tests 518 the performance of classifier using testing set. If the performance is satisfactory, the classifier is provided for use.").
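Neither document specifies how the claimed "degree of model similarity" is computed; one common, illustrative choice (assumed here, not taken from either reference) is cosine similarity over feature vectors associated with the avatar models in a group:

```python
# Illustrative only: cosine similarity as one possible "degree of model
# similarity" between avatar models represented as feature vectors.
import math

def cosine_similarity(a, b):
    """Return the cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [1.0, 0.0, 1.0]  # hypothetical feature vector of avatar model 1
v2 = [1.0, 0.0, 1.0]  # identical model -> similarity 1.0
v3 = [0.0, 1.0, 0.0]  # orthogonal model -> similarity 0.0
print(round(cosine_similarity(v1, v2), 3))
print(round(cosine_similarity(v1, v3), 3))
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, so such a measure could rank how alike the avatar models within one group are.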
Regarding claim 10, Sarna discloses the method defined in claim 1, wherein the classifying labels are preset by a user behavior of the user, and the user behavior collects interaction data related to the user on the platform through the application program (paragraph 78, line(s) 2-7 "In situations in which certain implementations discussed herein may collect or use personal information about users (e.g., user data, information about a user's social network, user's location, user's biometric information, user's activities and demographic information)").
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Sarna et al. (U.S. Pub. No. 20200175740) in view of Wang (CN Pub. No. 2023/11600735).
Regarding claim 4, Sarna discloses the method defined in claim 3, wherein the model training server receives the regeneration notice (paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."; also, paragraph 32, line(s) 1-7 "the memory 218 may include the cartoon generation module 116o and the classifier creation module 118. The cartoon generation module 116o comprises a user interface module 202, one or more classifiers 204, a cartoon asset module 206, and a rendering module 208. The classifier creation module 118 comprises a training module 222") and uses a deep-learning text-to-image diffusion model to generate those avatar models in real time; after packing, those avatar models are transmitted to the application program, and the application program unpacks the received avatar models and shows them on the displaying screen for the user to select (paragraph 7, line(s) 23-24 "presenting the cartoon avatar on a display of a user device"; also, paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."). Sarna does not disclose the use of a deep-learning text-to-image diffusion model to generate those avatar models.
However, in a similar field of endeavor, Wang discloses the use of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images (attached foreign patent with machine translation, paragraph 43, line(s) 1-2 "a Diffusion model to generate a character model, wherein the Diffusion model is a deep learning text-to-image generation model for generating high-quality images.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sarna's method defined in claim 3, in which the model training server receives the regeneration notice and the avatar models are transmitted to the application program, where the application program unpacks said models and shows them on the displaying screen for the user to select, with the features of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images. As demonstrated by Wang, one could specify the usage of a diffusion model as the framework or architecture that uses the deep neural network to generate various avatar models.
Regarding claim 5, Sarna discloses the method defined in claim 3, wherein the model training server receives the regeneration notice (paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."; also, paragraph 32, line(s) 1-7 "the memory 218 may include the cartoon generation module 116o and the classifier creation module 118. The cartoon generation module 116o comprises a user interface module 202, one or more classifiers 204, a cartoon asset module 206, and a rendering module 208. The classifier creation module 118 comprises a training module 222"), and extracts a corresponding plurality of image files from the database server based on those model parameters corresponding to each label parameter group (paragraph 27, line(s) 1-11 "The memory 218 may store and provide access to data to the other components of the application server 108. In some implementations, the memory 218 may store instructions and/or data that may be executed by the processor 216. The memory 218 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc. The memory 218 may be coupled to the bus 214 for communication with the processor 216, the communication unit 220, the data store 222 or the other components of the application server 108."; also, paragraph 66, line(s) 24-29 "The method 500 then crowd sources 512 the clean smaller set of images to produce positive images (tuples of image, attribute, label). In some implementations, this can be done by a dedicated group of human raters rather than crowdsourcing."; also, paragraph 67, line(s) 1-19 "the method 600 continues to block 606 and processes image with a shallow neural network model 228 as a classifier 204 to generate one or more attribute(s) and label(s). 
One implementation of this process 606 is described in more detail below with reference to FIG. 7. Then the method 600 determines 608 the cartoon assets corresponding to the attribute and label. For example, the cartoon asset module 206 may search the cartoon assets for particular assets that match the attribute and label provided by the block 606. Once the cartoon assets corresponding to the attribute and label have been identified"), the model training server uses the deep-learning text-to-image diffusion model to generate those avatar models in real time; after packing, those avatar models are transmitted to the application program, and the application program receives those avatar models, unpacks them, and shows them on the displaying screen for the user to select (paragraph 7, line(s) 23-24 "presenting the cartoon avatar on a display of a user device"; also, paragraph 31, line(s) 8-12 "The user application is coupled for communication with the respective computing device 106 or the application server 108 to receive, send or present messages, status, commands and other information."). Sarna does not disclose the use of a deep-learning text-to-image diffusion model to generate those avatar models.
However, in a similar field of endeavor, Wang discloses the use of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images (attached foreign patent with machine translation, paragraph 43, line(s) 1-2 "a Diffusion model to generate a character model, wherein the Diffusion model is a deep learning text-to-image generation model for generating high-quality images.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sarna's method defined in claim 3, in which the model training server receives the regeneration notice, extracts a corresponding plurality of image files from the database server based on those model parameters corresponding to each label parameter group, and the avatar models are transmitted to the application program, where the application program unpacks said models and shows them on the displaying screen for the user to select, with the features of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images. As demonstrated by Wang, one could specify the usage of a diffusion model as the framework or architecture that uses the deep neural network to generate various avatar models.
Regarding claim 6, Sarna as modified by Wang discloses the method defined in claim 5, wherein the method to generate those image files includes: the model training server extracting a classifying label list from the database server, based on the text contents of those classifying labels (Sarna: Figure 5, 506; also, paragraph 27, line(s) 5-8 "The memory 218 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc."; also, paragraph 40, line(s) 1-4 "The cartoon asset module 206 may be steps, processes, functionalities or a device including routines for storing, managing, and providing access to the individual sets of cartoon assets."; also, paragraph 66, line(s) 3 "The method 500 accesses 502 a large image data set"; also, paragraph 66, line(s) 10-12 "In block 506, the method 500 uses the images to train a deep neural network 226"; also, paragraph 66, line(s) 18-20 "method 500 then searches 508 for a smaller set of images that include faces (classifier for human faces) from the large image data set"), using the deep-learning text-to-image diffusion model to generate those image files, relating those image files to those classifying labels, and saving them to the database server (paragraph 26, line(s) 15-20 "the processor 216 may be coupled to the memory 218 via the bus 214 to access data and instructions therefrom and store data therein. The bus 214 may couple the processor 216 to the other components of the computing device 106 including, for example, the memory 218, communication unit 220, and the data store 222."; also, paragraph 27, line(s) 1-8 "The memory 218 may store and provide access to data to the other components of the application server 108. In some implementations, the memory 218 may store instructions and/or data that may be executed by the processor 216. 
The memory 218 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc."; also, paragraph 38, line(s) 1-4 "the set of classifiers 204 for classifying photos of users into the above attributes are combined into a pipeline. A photo or image may be provided to the pipeline as input"). Sarna does not disclose the use of a deep-learning text-to-image diffusion model to generate those image files.
However, in a similar field of endeavor, Wang discloses the use of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images (attached foreign patent with machine translation, paragraph 43, line(s) 1-2 "a Diffusion model to generate a character model, wherein the Diffusion model is a deep learning text-to-image generation model for generating high-quality images.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sarna's method defined in claim 5, in which the method to generate those image files includes the model training server extracting a classifying label list from the database server based on the text contents of those classifying labels, relating those image files to those classifying labels, and saving them to the database server, with the features of a stable diffusion model, which is a deep-learning text-to-image generation model used to generate high-quality images. As demonstrated by Wang, one could specify the usage of a diffusion model as the framework or architecture that uses the deep neural network to generate various avatar models.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sarna et al. (U.S. Pub. No. 20200175740) in view of Donnell et al. (U.S. Pub. No. 2024/0029330).
Regarding claim 8, Sarna discloses the method defined in claim 1, wherein those model parameters are analyzed by natural language processing to eliminate the unreasonable group. Sarna does not disclose that the model parameters are analyzed by natural language processing.
However, in a similar field of endeavor, Donnell discloses analyzing model parameters by natural language processing to eliminate the unreasonable group (paragraph 31, line(s) 1-8 "language processing module and/or diagnostic engine may generate the language processing model by any suitable method, including without limitation a natural language processing classification algorithm; language processing model may include a natural language process classification model that enumerates and/or derives statistical relationships between input terms and output terms").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Sarna's method defined in claim 1 with a natural language processing classification model that enumerates and/or derives statistical relationships between input terms and output terms. As demonstrated by Donnell, one could specify the usage of a natural language processing model to analyze said model parameters to further narrow down the groups.
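Donnell's quoted passage describes a classification model that "enumerates and/or derives statistical relationships between input terms and output terms." Purely as an illustrative sketch (no implementation is given in either reference; all names and data here are hypothetical), such term-to-class statistics can be enumerated and used to score candidate classes:

```python
# Illustrative sketch: enumerate co-occurrence counts between input terms
# and output classes, then classify new text by summing those counts.
from collections import Counter

def train_term_stats(examples):
    """Enumerate (term, class) co-occurrence counts from labeled examples."""
    stats = Counter()
    for text, label in examples:
        for term in text.lower().split():
            stats[(term, label)] += 1
    return stats

def classify(stats, text, labels):
    """Pick the class whose terms statistically co-occur most with the input."""
    def score(label):
        return sum(stats[(term, label)] for term in text.lower().split())
    return max(labels, key=score)

examples = [
    ("long curly hair", "hairstyle"),
    ("short straight hair", "hairstyle"),
    ("round wide eyes", "eye_shape"),
]
stats = train_term_stats(examples)
print(classify(stats, "curly hair", ["hairstyle", "eye_shape"]))  # hairstyle
```

Applied to the claimed method, a statistic of this kind could score label parameter groups and discard ("eliminate") a group whose terms have no statistical support.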
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAI WEI TOMMY LI whose telephone number is (571) 272-1170. The examiner can normally be reached 6:00 AM-4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAI LI/Junior Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613