Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Non-Final Rejection is issued in response to the Request for Continued Examination (RCE) filed 02/23/2026.
Claims 1, 11, and 18 are amended.
Claims 1-20 remain pending.
Response to Arguments
Argument 1: Applicant argues in the Applicant Arguments/Remarks Made in an Amendment filed 02/23/2026, pgs. 1-3, that the prior art fails to teach the primary claim limitation, “wherein the representation of the output message is an alphanumeric representation formatted based on an expected communication protocol between the imaging device and the third-party computing device.”
Response to Argument 1: Applicant’s arguments have been considered; however, in light of the amendments, updated grounds of rejection over prior art (U.S. Patent Application Publication No. 20170223225, “Kaneda”) are applied below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3, 5, 7, 9-13, 15, 17-19 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication No. 20170223225 “Kaneda”.
Claim 1:
Kaneda teaches a method for operating a machine vision system, the machine vision system including a computing device for executing an application (i.e. para. [0021], “The information processing device 105 includes a printer driver, as a print application, for the image forming device 101. The information processing device 105 uses the printer driver to generate image data as a print target, including image data written in a Page Description Language (PDL), and transmits the image data to the image forming device 101”, wherein the BRI for a machine vision system encompasses an external device that generates image data in a first PDL format, and the BRI for a computing device encompasses the connected image forming device that receives the image data, and the BRI for an application encompasses the print application for converting an image to an external device) and an imaging device communicatively coupled to the computing device, the imaging device being operable to communicate with a third-party computing device (i.e. para. [0023], “the information processing device 105 uses the printer driver to transmit the image data to the image forming device 101. The tablet 103 includes a print application APP-A, and uses the print application APP-A to transmit the image data to the image forming device 101. The smartphone 121 transmits the image data to the image forming device 101 through wireless communications via the conversion server 111. The conversion server 111 includes a print application APP-B. When the smartphone 121 transmits the image data to the conversion server 111 through the wireless communications, the conversion server 111 uses the print application APP-B to transmit (transfer) the image data to the image forming device”, wherein the BRI for a third-party computing device encompasses how the various image generators may transmit data to a printer device, after formatting said data), the method comprising:
configuring, via the application, a machine vision job (i.e. para. [0048], the analysis unit of the RIP 208 analyzes the PDL data stored by the control unit 301, and generates, based on the result of the analysis intermediate data of a format that can be rasterized by the rendering unit), the configuring the machine vision job including:
configuring at least one tool to be executed by the imaging device during an execution of the job (i.e. para. [0048], “When the result of the analysis indicates that the PDL data includes the type information about the sheet, the analysis unit notifies the control unit 301 of the type information” wherein the BRI for one tool encompasses configuring the rendering unit to rasterize PDL data into a certain format); configuring an output data stream based on the at least one tool, the output data stream being formatted for communication to the third-party computing device (i.e. para. [0048], “The rendering unit then rasterizes the generated intermediate data to generate the raster image, and the control unit 301 stores the generated raster image”, wherein the BRI for an output data stream encompasses the rasterized PDL data to be sent to an external device); and displaying, via the application, a representation of an output message, the representation of the output message being formed based on the configuring the output data stream, the representation of the output message being a representation of a transmission of a payload message from the imaging device to the third-party computing device (i.e. para. [0051], “The determination is described with reference to Table 1 in FIG. 8. Here, the setting unit functions as a controller that at least performs control to determine whether to display a preview image and to receive an image rotation instruction as described below”, wherein the BRI for an output message encompasses a preview of an image to be displayed based on the determined data format of the external device); transmitting, from the computing device to the imaging device, the machine vision job (i.e. para. 
[0077], The control unit 301 transmits the raster image after the rotation and the information about the sheet feeder selected in step S4003, to the printer); and executing the machine vision job on the imaging device, wherein, the executing the machine vision job includes transmitting the payload message from the imaging device to the third-party computing device (i.e. para. [0036], The image processing circuit 209 executes image processing on the raster image. The image processing includes color conversion processing, rotation processing, reduction processing, and gamma correction. The raster image subjected to the image processing is transmitted to the printer 212 ); wherein the representation of the output message is an alphanumeric representation (i.e. para. [0061], “Each of the images 541 to 544 includes a letter “F””, wherein the output preview has a letter F representing a format for the determined communication protocol) formatted based on an expected communication protocol between the imaging device and the third-party computing device (i.e. para. [0054], “When the communication protocol is LPR or RAW, the setting unit determines not to set the top of the image based on the user instruction due to the following reason. When the communication protocol is LPR or RAW, the print application that generated the PDL data is presumably the printer driver, and the PDL data transmitted from the printer driver presumably includes the top-bottom information appropriately set by the user, which means the included top-bottom information has high reliability”, wherein it is noted that an alphanumeric orientation for the letter “F” of the preview image may be output based on the expected data format of the device that sent the image data).
Claim 3:
Kaneda teaches the method of claim 1,
wherein the displaying the representation of the output message occurs in response to the configuring the output data stream (i.e. para. [0051], “The determination is described with reference to Table 1 in FIG. 8. Here, the setting unit functions as a controller that at least performs control to determine whether to display a preview image and to receive an image rotation instruction as described below”).
Claim 7:
Kaneda teaches the method of claim 1.
Kaneda further teaches wherein displaying, via the application, a representation of an output message further comprises:
displaying the representation of the output message in an entry mode by adding a header comprising metadata to the output message (i.e. para. [0064], “the image 545 indicates that an envelope is placed on the manual feed tray with the flap disposed on the upper (rear) side”, wherein it is noted in Fig. 5 that the BRI for a header comprising metadata encompasses how the expected image may be an envelope or an A4 or A3 sheet of paper).
Claim 9:
Kaneda teaches the method of claim 1, wherein the configuring the output data stream based on the at least one tool includes executing each of the at least one tool with a respective input data set to receive a corresponding output data set (i.e. para. [0039], The receiving unit also determines the communication protocol used when the PDL data is received and the data format of the PDL data, and stores information indicating the protocol and the data format. Examples of the communication protocol include six types of protocols: Line Printer Remote (LPR), RAW, Internet Printing Protocol (IPP), Hypertext Transfer Protocol (HTTP), Extensible Messaging and Presence Protocol (XMPP), and Web Services For Devices (WSD) illustrated in Table 1 in FIG. 8. The data format includes, as examples, five types of formats: Point Cloud Library (PCL), PostScript, Portable Document Format (PDF), Printer Working Group (PWG) raster, and Extensive Markup Language Paper Specification (XPS), which are also illustrated in Table 1).
Claim 10:
Kaneda teaches the method of claim 1, wherein the representation of the output message is formed further based on a prior image (i.e. para. [0050], “The specific sheet type is a sheet that includes additional limitations. More specifically, the sheet is of the specific type when the orientation of the image to be printed is relatively limited with respect to the orientation of sheet being fed in printing. For example, the specific sheet type indicates an envelope and a postcard”, wherein the BRI for a prior image encompasses how the preview is further modified by the prior pre-determined type of paper sheet shape).
Claim 11:
Kaneda teaches a machine vision system comprising:
a computing device for executing an application, the application operable to configure a machine vision job (i.e. para. [0021], “The information processing device 105 includes a printer driver, as a print application, for the image forming device 101. The information processing device 105 uses the printer driver to generate image data as a print target, including image data written in a Page Description Language (PDL), and transmits the image data to the image forming device 101”, wherein the BRI for a machine vision system encompasses an external device that generates image data in a first PDL format, and the BRI for a computing device encompasses the connected image forming device that receives the image data, and the BRI for an application encompasses the print application for converting an image to an external device), wherein configuring the machine vision job includes:
configuring at least one tool to be executed by the imaging device during an execution of the job (i.e. para. [0048], “When the result of the analysis indicates that the PDL data includes the type information about the sheet, the analysis unit notifies the control unit 301 of the type information” wherein the BRI for one tool encompasses configuring the rendering unit to rasterize PDL data into a certain format);
configuring an output data stream based on the at least one tool, the output data stream being formatted for communication to a third-party computing device (i.e. para. [0048], “The rendering unit then rasterizes the generated intermediate data to generate the raster image, and the control unit 301 stores the generated raster image”, wherein the BRI for an output data stream encompasses the rasterized PDL data to be sent to an external device); and
displaying, via the application, a representation of an output message, the representation of the output message being formed based on the configuring the output data stream, the output message being a representation of a transmission of a payload message from the imaging device to the third-party computing device (i.e. para. [0051], “The determination is described with reference to Table 1 in FIG. 8. Here, the setting unit functions as a controller that at least performs control to determine whether to display a preview image and to receive an image rotation instruction as described below”, wherein the BRI for an output message encompasses a preview of an image to be displayed based on the determined data format of the external device), wherein the displayed representation of the output message is formed further based on at least one of: (i) user-entered data (i.e. para. [0051], S4005, the setting unit of the control unit 301 determines, based on the user instruction, whether to set the top of the image to be printed), (ii) prior image data (i.e. para. [0042], The type information indicates the type of a sheet on which the image is printed, e.g., an A4 size normal paper, an A4 size coated paper, an A3 size normal paper, an envelope, or a postcard), or (iii) default data (i.e. para. [0049], If the PDL data includes no type information, the setting unit selects a predetermined sheet feed cassette, e.g., the cassette 1, as the sheet feeder);
the application being further operable to cause the computing device to transmit the machine vision job to an imaging device (i.e. para. [0077], The control unit 301 transmits the raster image after the rotation and the information about the sheet feeder selected in step S4003, to the printer); and
the imaging device configured to receive the machine vision job and to execute the machine vision job which includes transmitting the payload message from the imaging device to the third-party computing device (i.e. para. [0036], The image processing circuit 209 executes image processing on the raster image. The image processing includes color conversion processing, rotation processing, reduction processing, and gamma correction. The raster image subjected to the image processing is transmitted to the printer 212 );
wherein the representation of the output message is an alphanumeric representation (i.e. para. [0061], “Each of the images 541 to 544 includes a letter “F””, wherein the output preview has a letter F representing a format for the determined communication protocol) formatted based on an expected communication protocol between the imaging device and the third-party computing device (i.e. para. [0054], “When the communication protocol is LPR or RAW, the setting unit determines not to set the top of the image based on the user instruction due to the following reason. When the communication protocol is LPR or RAW, the print application that generated the PDL data is presumably the printer driver, and the PDL data transmitted from the printer driver presumably includes the top-bottom information appropriately set by the user, which means the included top-bottom information has high reliability”, wherein it is noted that an alphanumeric orientation for the letter “F” of the preview image may be output based on the expected data format of the device that sent the image data).
Claim 12:
Kaneda teaches the system of claim 11,
wherein configuring the machine vision job further includes forming the displayed representation of the output message further based on the user-entered data (i.e. para. [0051], S4005, the setting unit of the control unit 301 determines, based on the user instruction, whether to set the top of the image to be printed).
Claim 13:
Kaneda teaches the system of claim 11,
wherein configuring the machine vision job further includes forming the displayed representation of the output message further based on the prior image data (i.e. para. [0042], The type information indicates the type of a sheet on which the image is printed, e.g., an A4 size normal paper, an A4 size coated paper, an A3 size normal paper, an envelope, or a postcard).
Claim 15:
Kaneda teaches the system of claim 11,
wherein the displaying the representation of the output message occurs in response to the configuring the output data stream (i.e. para. [0052], “In another embodiment, a reduced image 630-2 is displayed in a rotated state based on the initial value, as illustrated in a screen 630-1 in FIG. 7. The screen 630-1 is displayed on the operation unit 211 in place of the UI screen 630 in FIG. 6. The image 630-2 indicates the top of the image set by the user pressing the “OK” button. This indicates that the upper side of the image 630-2 being displayed is set as the top of the image”).
Claim 17:
Claim 17 is the system claim reciting similar limitations to Claim 7 and is rejected for similar reasons.
Claim 18:
Kaneda teaches a machine vision system comprising:
a computing device for executing an application, the application operable to configure a machine vision job (i.e. para. [0021], “The information processing device 105 includes a printer driver, as a print application, for the image forming device 101. The information processing device 105 uses the printer driver to generate image data as a print target, including image data written in a Page Description Language (PDL), and transmits the image data to the image forming device 101”, wherein the BRI for a machine vision system encompasses an external device that generates image data in a first PDL format, and the BRI for a computing device encompasses the connected image forming device that receives the image data, and the BRI for an application encompasses the print application for converting an image to an external device), and the application further operable to display a representation of an input message (i.e. para. [0023], “the information processing device 105 uses the printer driver to transmit the image data to the image forming device 101. The tablet 103 includes a print application APP-A, and uses the print application APP-A to transmit the image data to the image forming device 101. The smartphone 121 transmits the image data to the image forming device 101 through wireless communications via the conversion server 111. The conversion server 111 includes a print application APP-B. When the smartphone 121 transmits the image data to the conversion server 111 through the wireless communications, the conversion server 111 uses the print application APP-B to transmit (transfer) the image data to the image forming device”, wherein the BRI for an input message encompasses the display of a formatted image) by:
configuring, via the application, a machine vision job, the configuring the machine vision job including configuring at least one tool to be executed by an imaging device during an execution of the job (i.e. para. [0048], “When the result of the analysis indicates that the PDL data includes the type information about the sheet, the analysis unit notifies the control unit 301 of the type information” wherein the BRI for one tool encompasses configuring the rendering unit to rasterize PDL data into a certain format);
receiving, from a third-party computing device, a desired output of the machine vision job; determining, via the application, the representation of the input message based on: (i) the configured machine vision job (i.e. para. [0048], the analysis unit of the RIP 208 analyzes the PDL data stored by the control unit 301, and generates, based on the result of the analysis intermediate data of a format that can be rasterized by the rendering unit), and (ii) the desired output of the machine vision job; displaying, via the application, the determined representation of the input message (i.e. para. [0060], FIG. 5 illustrates screens 540 and 550 displayed on the display unit by the UI control unit. The screen 540 is used for setting the initial value of the top of an image when the image is printed on an envelope. Similarly, the screen 550 is used to set the initial value of the top of an image when the image is printed on an A4 size normal paper); and
displaying, via the application, a representation of an output message, the output message being a representation of a transmission of a payload message from the imaging device to the third-party computing device (i.e. para. [0051], “The determination is described with reference to Table 1 in FIG. 8. Here, the setting unit functions as a controller that at least performs control to determine whether to display a preview image and to receive an image rotation instruction as described below”, wherein the BRI for an output message encompasses a preview of an image to be displayed based on the determined data format of the external device);
wherein the representation of the output message is an alphanumeric representation (i.e. para. [0061], “Each of the images 541 to 544 includes a letter “F””, wherein the output preview has a letter F representing a format for the determined communication protocol) formatted based on an expected communication protocol between the imaging device and the third-party computing device (i.e. para. [0054], “When the communication protocol is LPR or RAW, the setting unit determines not to set the top of the image based on the user instruction due to the following reason. When the communication protocol is LPR or RAW, the print application that generated the PDL data is presumably the printer driver, and the PDL data transmitted from the printer driver presumably includes the top-bottom information appropriately set by the user, which means the included top-bottom information has high reliability”, wherein it is noted that an alphanumeric orientation for the letter “F” of the preview image may be output based on the expected data format of the device that sent the image data).
Claim 19:
Kaneda teaches the system of claim 18, further comprising the imaging device,
wherein the imaging device is configured to receive the machine vision job and to execute the machine vision job which includes transmitting the payload message from the imaging device to the third-party computing device (i.e. para. [0036], The image processing circuit 209 executes image processing on the raster image. The image processing includes color conversion processing, rotation processing, reduction processing, and gamma correction. The raster image subjected to the image processing is transmitted to the printer 212).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20170223225 “Kaneda” and further in view of U.S. Patent Application Publication No. 20080088590 “Brown”.
Claim 2:
Kaneda teaches the method of claim 1.
While Kaneda teaches displaying an alphanumeric representation of the output message, Kaneda may not explicitly teach
wherein the representation of the output message is a binary representation of the output data stream. However, Brown teaches
wherein the representation of the output message is a binary representation of the output data stream (i.e. para. [0086], Each of these configurations is stored in a digital data format, such as in a string of hexadecimal characters for each image of each display of a configuration. Each image data string can be configured in a manner where parsing delineates separate image data strings for different images of a layout configuration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the representation of the output message is a binary representation of the output data stream, to Kaneda’s machine vision system, as taught by Brown. One would have been motivated to combine Brown with Kaneda, and would have had a reasonable expectation of success in doing so, as the combination provides more convenience to a user by saving display space and allowing verification of the file structure.
Claim 14:
Claim 14 is the system claim reciting similar limitations to Claim 2 and is rejected for similar reasons.
Claim(s) 4, 16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20170223225 “Kaneda” and further in view of U.S. Patent Application Publication No. 20100045863 “Park”.
Claim 4:
Kaneda teaches the method of claim 1.
While Kaneda teaches wherein the third-party computing device is a computer, Kaneda may not explicitly teach wherein the third-party computing device is a programmable logic controller (PLC).
However, Park teaches wherein the third-party computing device is a programmable logic controller (PLC) (i.e. para. [0069], “image display system also includes the PLC carriers 120, 220, and 320 for transmitting/receiving a signal between the host image processor 100 and the sub image processor 200”, wherein a host image processor may send image data to a PLC 320 of an external image data generator 300).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the third-party computing device is a programmable logic controller (PLC), to Kaneda’s machine vision system, as taught by Park. One would have been motivated to combine Park with Kaneda, and would have had a reasonable expectation of success in doing so, because the combination provides more user convenience by increasing the number of compatible devices for information transfer.
Claim 16:
Claim 16 is the system claim reciting similar limitations to Claim 4 and is rejected for similar reasons.
Claim 20:
Claim 20 is the system claim reciting similar limitations to Claim 4 and is rejected for similar reasons.
Claim(s) 6 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20170223225 “Kaneda” and further in view of U.S. Patent Application Publication No. 20180183952 “Imaoka”.
Claim 6:
Kaneda teaches the method of claim 5.
While Kaneda teaches configuring an output data stream, Kaneda may not explicitly teach
wherein configuring an output data stream based on the at least one tool further comprises:
displaying a size field for each field of the plurality of fields;
receiving, from a user, an input for at least one of the size fields; and configuring the output data stream further based on the received input.
However, Imaoka teaches
wherein configuring an output data stream based on the at least one tool further comprises: displaying a size field for each field of the plurality of fields (i.e. para. [0043], “The icons d7 to d13 respectively correspond to a plurality of configurable detailed settings pertaining to the function selected in the function selection area R1”, wherein it is noted that each of the icons d7 to d13 have a displayed size field that may be selected by a user); receiving, from a user, an input for at least one of the size fields (i.e. para. [0043], The setting selection area R2 is an area for receiving settings related to functions selected in the function selection area R1); and configuring the output data stream further based on the received input (i.e. para. [0052], The display controller 105 also displays a sample image p3 in the preview display area R3, as illustrated in FIG. 6B. The sample image p3 is a processed image on which the copy function with the setting value of “2 in 1” has been executed).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add wherein configuring an output data stream based on the at least one tool further comprises: displaying a size field for each field of the plurality of fields; receiving, from a user, an input for at least one of the size fields; and configuring the output data stream further based on the received input, to Kaneda’s machine vision system, as taught by Imaoka. One would have been motivated to combine Imaoka with Kaneda, and would have had a reasonable expectation of success in doing so, as the combination provides more flexibility to a user by providing an intuitive way to send and receive processed images.
Claim 8:
Kaneda teaches the method of claim 1.
While Kaneda teaches displaying a representation of an output message, Kaneda may not explicitly teach wherein displaying, via the application, a representation of an output message further comprises: displaying the representation of the output message in a raw data mode by displaying raw data of the output message, and not adding a header comprising metadata to the output message.
However, Imaoka teaches
displaying the representation of the output message in a raw data mode by displaying raw data of the output message, and not adding a header comprising metadata to the output message (i.e. para. [0052], Fig. 6A-B, “The display controller 105 also displays a sample image p3 in the preview display area R3, as illustrated in FIG. 6B. The sample image p3 is a processed image on which the copy function with the setting value of “2 in 1” has been executed”, wherein it is noted that a header is not added to the representation of the output message).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to add displaying the representation of the output message in a raw data mode by displaying raw data of the output message, and not adding a header comprising metadata to the output message, to Kaneda’s machine vision system, as taught by Imaoka. One would have been motivated to combine Imaoka with Kaneda, and would have had a reasonable expectation of success in doing so, as the combination provides more flexibility to a user by providing an intuitive way to send and receive processed images.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent Application Publication No. 20150371422 “Kokemohr”, teaches in para. [0076] previewed modifications made by the user manipulating the filters in the menu 406 to be applied to image 402, e.g., causing a new image to be stored in an accessible storage device.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TAN whose telephone number is (571)272-7433. The examiner can normally be reached M-F 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D.T./ Examiner, Art Unit 2145
/CESAR B PAULA/ Supervisory Patent Examiner, Art Unit 2145