Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the patent application as filed on 11/7/2023.
This action is made Non-Final.
Claims 1-20 are pending in the case. Claims 1 and 16 are independent claims.
Drawings
The drawings filed on 11/7/2023 have been accepted by the Examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 7-8, 10-13, 16 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1 and 16 recite at least “determining edges of panels within respective sheets of the graphic narrative; segmenting elements within the panels; applying the segmented elements to a first machine learning (ML) model to predict a narrative flow, the predicted narrative flow comprising an order in which the panels are to be viewed; and assigning, in accordance with the predicted narrative flow, index values to the respective panels, the index values representing positions in an ordered list that corresponds to the predicted narrative flow”. These limitations are construed as abstract ideas because they can be performed in the human mind and/or with pen and paper. A human can certainly observe and determine the edges of panels within sheets of a graphic narrative, segment elements within the panels, predict a narrative flow based on the segmentation of elements within the panels, and assign index values to the respective panels, especially if the graphic narrative is on paper.
This judicial exception is not integrated into a practical application because the additional limitations of “a processor” and “memory” (from claim 16; claim 1 recites no additional limitations) are merely generic computing components on which the instructions implementing the abstract idea are executed. Additional limitations directed toward mere instructions to apply the exception to generic computing components, alone or in combination, do not integrate the judicial exception into a practical application (see MPEP § 2106.05(f)).
As to the use of ML technology for the data-processing limitations, these steps amount to nothing more than applying preexisting artificial intelligence or machine learning (AI/ML) technologies to narrative flow prediction. There is no improvement to the ML techniques themselves, such as an advance in the field of computer science or the design of a new neural network, and the outcome of the AI/ML operations is not used to control a technological process.
Further, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually; there is no indication that the combination of elements improves the functioning of a computer or any other technology, including AI/ML technology; their collective functions merely provide a conventional computer implementation. None of the additional elements "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 593, 610-11 (2010)).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements identified above, being directed toward mere instructions to apply the exception to generic computing components, alone or in combination, are well-understood, routine, and conventional, do not provide an inventive concept, and thus do not amount to significantly more than the judicial exception. Therefore, independent claims 1 and 16 are directed toward ineligible subject matter.
Regarding claim 13, the judicial exception is not integrated into a practical application because the claim does not recite any additional limitations beyond the identified abstract idea(s) and, therefore, does not integrate the judicial exception into a practical application (see MPEP § 2106.05).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as noted, there are no additional elements; the claim thus does not amount to significantly more than the judicial exception. Therefore, claim 13 is directed toward ineligible subject matter.
Dependent claims 3, 7, 8, 10-13 and 19-20 recite additional limitations that are also construed as additional abstract ideas (claims 3, 12, and 13), mere instructions to apply the judicial exception to generic computing components (claims 7, 8, 10, 19, and 20), or insignificant extra-solution activity (claim 11), and are, therefore, also directed toward ineligible subject matter.
The analysis of dependent claims 3, 7, 8, 10-13 and 19-20 has resulted in the determination that these claims recite ineligible subject matter.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 12, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 2 and 17 recite “among the text elements a same page,” which appears to be a typographical error; one or more words appear to be missing, rendering the scope of the limitation unclear, and “a same page” lacks proper antecedent basis because it is recited twice in the claim. Claim 12 introduces “a second ML model” but then fails to attribute any features of the claim to the newly introduced model, instead reciting “the first ML model.” It appears the claim contains a typographical error and is meant to reference the newly introduced “second ML model” rather than the “first ML model”; it has been interpreted as such. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-9, 12, 13, 16 and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sugaya (USPUB 20210073458 A1).
Claim 1:
Sugaya discloses A method of modifying a graphic narrative, comprising: determining edges of panels within respective sheets of the graphic narrative (0060: the control unit 10 calculates a size of each bin included in the comic capture data); segmenting elements within the panels (0042: characters corresponding to a character image or a pattern image which is stored as the image template is stored as text data. For example, if the image data of the character “Boom” is stored as the image template, the text data of “Boom” is stored as the text template); applying the segmented elements to a first machine learning (ML) model to predict a narrative flow, the predicted narrative flow comprising an order in which the panels are to be viewed (Figs 5A, 5B and 0051-52: the control unit 10 reads the bin order database 31 stored in the storage unit 30, predicts the correct order of multiple bins included in the comic capture data based on a bin order learned in the past, and cause the screen display unit 50 to display the multiple bins in sequence according to the predicted order... the control unit 10 reads the bin order database 31 stored in the storage unit 30, performs image recognition on the comic capture data acquired in step S10, and searches the closest image template from the image templates stored in the bin order database); and assigning, in accordance with the predicted narrative flow, index values to the respective panels, the index values representing positions in an ordered list that corresponds to the predicted narrative flow (Figs 5A, 5B and 0051-55: the control unit 10 reads the bin order database 31 stored in the storage unit 30, predicts the correct order of multiple bins included in the comic capture data based on a bin order learned in the past, and cause the screen display unit 50 to display the multiple bins in sequence according to the predicted order... 
the control unit 10 determines that the image template corresponding to the bin order identifier “1” is the closest image template... the control unit 10 reads a bin order template corresponding to the determined closest image template, and allocates a bin order corresponding to the read bin order template to the comic capture data acquired in step S10. FIG. 5 (B) shows an example of the comic capture data after the bin order is allocated... the control unit 10 extracts the bins one by one from the comic capture data in the allocated bin order, and displays them in the image display unit).
Claim 3:
Sugaya discloses applying the segmented elements to the first ML model to predict the narrative flow further comprises predicting a flow of action within one or more of the panels (Figs 5A, 5B and 0051-52: the control unit 10 reads the bin order database 31 stored in the storage unit 30, predicts the correct order of multiple bins included in the comic capture data based on a bin order learned in the past, and cause the screen display unit 50 to display the multiple bins in sequence according to the predicted order... the control unit 10 reads the bin order database 31 stored in the storage unit 30, performs image recognition on the comic capture data acquired in step S10, and searches the closest image template from the image templates stored in the bin order database).
Claim 4:
Sugaya discloses the one or more of the panels have larger area than an average area of the panels, and the method further comprises displaying the graphic narrative in a digital format by showing multiple views for each panel of the one or more of the panels, such that a first view of the multiple views for a panel shows a part of the panel corresponding to a first occurrence in the flow of the action and a second view of the multiple views shows a part of the panel corresponding to a second occurrence, wherein the first occurrence precedes the second occurrence in the flow of the action (Figs 5A, 5B, 6A, 6B, 7A and 7B: panels of varying sizes are displayed, some being larger than average, with panels being presented on an electronic device in a plurality of views, for example in Figs 7A and 7B one view of the panels is before enlargement with another being after enlargement).
Claim 5:
Sugaya discloses the displaying the graphic narrative in the digital format further comprises transitioning from the first view of the panel to the second view of the panel by zooming, panning, changing a focus, highlighting, or fading from the first view of the panel to the second view of the panel (Figs 5A, 5B, 6A, 6B, 7A and 7B: panels of varying sizes are displayed, some being larger than average, with panels being presented on an electronic device in a plurality of views, for example in Figs 7A and 7B one view of the panels is before enlargement with another being after enlargement).
Claim 6:
Sugaya discloses determining scores representing uncertainties for the index values of the respective panels within the ordered list; flagging panels for which the scores of the flagged panels exceed a predefined threshold; sending, to a user device, the panels associated with the corresponding index values and indicia of which of the panels have been flagged; and modifying the predicted narrative flow based on user interface inputs indicating corrections to the predicted narrative flow (0057: the order of bins displayed by the image display unit 50 may be incorrect. Therefore, the user may confirm the order of the bins displayed on the screen display unit 50, and corrects the order by manual replacement if necessary. The control unit 10 may perform replacement of the bin order according to the correction instruction operation. If the order is corrected, the corrected order is determined as the order of bins).
Claim 7:
Sugaya discloses ingesting the graphic narrative; slicing the graphic narrative into respective pages and determining an order of the pages; determining that panels on a page earlier in the order of the pages occur earlier in the predicted narrative flow than panels on a page that is later in the order of the pages; and displaying, on a display of a device, the panels according to the predicted order in which the panels are to be viewed (Figs 6A, 6B, 0054-56: the control unit 10 reads a bin order template corresponding to the determined closest image template, and allocates a bin order corresponding to the read bin order template to the comic capture data... the control unit 10 extracts the bins one by one from the comic capture data in the allocated bin order, and displays them in the image display unit... Comics are displayed in the image display unit 50 in an order corresponding to the bin order template. In FIG. 6 (A), a line inserted between bins represents a page break).
Claim 8:
Sugaya discloses displaying, on a display of a device, the panels according to the predicted order in which the panels are to be viewed, and rendering the panels in accordance with a screen size of the display of the device, wherein the device is an electronic reading device, a tablet or a smartphone on which is running an electronic reading application; a website accessed via a web browser; or printing a copy of an electronic version of the graphic narrative (0060-62: magnification may be changed according to the size of the screen to display the size of the bin. For example, the control unit 10 calculates a size of each bin included in the comic capture data, and disposes the each bin in a predetermined area of the storage unit 30. Moreover, the control unit 10 reads the image information database 33, and calculates, for each bin, the magnification that may be displayed by the image display unit 50 and may be displayed as large as possible. Furthermore, the control unit 10 enlarges and displays the each bin included in the comic capture data at the calculated magnification... in a case where the bin is small and difficult to see, the size of the bin may be changed such that the bin fits on the screen, as shown in FIG. 7 (B). With such configuration, it is possible to construct a comic data display system 1 in which a user is able to easily and visually recognize the contents even in a terminal with a small screen such as a smart phone... the bins are displayed in one column on each screen, but the bins may also be configured to be displayed in two or more columns according to the size of the terminal screen. For example, a smartphone may be set to display the bins in one column, and a tablet terminal having a screen of 10 inches or more may be set to display the bins in two columns. With such configuration, it is possible to achieve an excellent comic data display system that is easy to use while fully exerting the terminal's screen display performance).
Claim 9:
Sugaya discloses modifying the panels to increase a uniformity of a size and/or shape of the panels, such that the modified panels are compatible with being displayed as an electronic version of the graphic narrative (0060-62: magnification may be changed according to the size of the screen to display the size of the bin. For example, the control unit 10 calculates a size of each bin included in the comic capture data, and disposes the each bin in a predetermined area of the storage unit 30. Moreover, the control unit 10 reads the image information database 33, and calculates, for each bin, the magnification that may be displayed by the image display unit 50 and may be displayed as large as possible. Furthermore, the control unit 10 enlarges and displays the each bin included in the comic capture data at the calculated magnification... in a case where the bin is small and difficult to see, the size of the bin may be changed such that the bin fits on the screen, as shown in FIG. 7 (B). With such configuration, it is possible to construct a comic data display system 1 in which a user is able to easily and visually recognize the contents even in a terminal with a small screen such as a smart phone... the bins are displayed in one column on each screen, but the bins may also be configured to be displayed in two or more columns according to the size of the terminal screen. For example, a smartphone may be set to display the bins in one column, and a tablet terminal having a screen of 10 inches or more may be set to display the bins in two columns. With such configuration, it is possible to achieve an excellent comic data display system that is easy to use while fully exerting the terminal's screen display performance).
Claim 12:
Sugaya discloses segmenting elements within the panels further comprises: applying a second ML model to a panel of the panels, the first ML model determining bounded regions within the panel that correspond to background, foreground, text bubbles, objects, and/or characters, and identifying the bounded regions as the segmented elements (0042: characters corresponding to a character image or a pattern image which is stored as the image template is stored as text data. For example, if the image data of the character “Boom” is stored as the image template, the text data of “Boom” is stored as the text template).
Claim 13:
Sugaya discloses segmenting the elements within the panels is performed using a semantic segmentation method that is selected from the group consisting of a Fully Convolutional Network (FCN) method, a U-Net method, a SegNet method, a Pyramid Scene Parsing Network (PSPNet) method, a DeepLab method, a Mask R-CNN, an Object Detection and Segmentation method, a fast R-CNN method, a faster R-CNN method, a You Only Look Once (YOLO) method, a fast R-CNN method, a PASCAL VOC method, a COCO method, a ILSVRC method, a Single Shot Detection (SSD) method, a Single Shot MultiBox Detector method, and a Vision Transformer (ViT) method (0042: characters corresponding to a character image or a pattern image which is stored as the image template is stored as text data. For example, if the image data of the character “Boom” is stored as the image template, the text data of “Boom” is stored as the text template: equivalent to the claimed “Object Detection and Segmentation method”).
Claim 16:
Sugaya discloses a computing apparatus comprising: a processor; and a memory storing instructions (0030-31) that, when executed by the processor, configure the apparatus to: determine edges of panels within respective sheets of the graphic narrative (0060: the control unit 10 calculates a size of each bin included in the comic capture data); segment elements within the panels (0042: characters corresponding to a character image or a pattern image which is stored as the image template is stored as text data. For example, if the image data of the character “Boom” is stored as the image template, the text data of “Boom” is stored as the text template); apply the segmented elements to a first machine learning (ML) model to predict a narrative flow, the predicted narrative flow comprising an order in which the panels are to be viewed (Figs 5A, 5B and 0051-52: the control unit 10 reads the bin order database 31 stored in the storage unit 30, predicts the correct order of multiple bins included in the comic capture data based on a bin order learned in the past, and cause the screen display unit 50 to display the multiple bins in sequence according to the predicted order... the control unit 10 reads the bin order database 31 stored in the storage unit 30, performs image recognition on the comic capture data acquired in step S10, and searches the closest image template from the image templates stored in the bin order database); and assign, in accordance with the predicted narrative flow, index values to the respective panels, the index values representing positions in an ordered list that corresponds to the predicted narrative flow (Figs 5A, 5B and 0051-55: the control unit 10 reads the bin order database 31 stored in the storage unit 30, predicts the correct order of multiple bins included in the comic capture data based on a bin order learned in the past, and cause the screen display unit 50 to display the multiple bins in sequence according to the predicted order...
the control unit 10 determines that the image template corresponding to the bin order identifier “1” is the closest image template... the control unit 10 reads a bin order template corresponding to the determined closest image template, and allocates a bin order corresponding to the read bin order template to the comic capture data acquired in step S10. FIG. 5 (B) shows an example of the comic capture data after the bin order is allocated... the control unit 10 extracts the bins one by one from the comic capture data in the allocated bin order, and displays them in the image display unit).
Claim 18:
Sugaya discloses determine scores representing uncertainties for the index values of the respective panels within the ordered list; flag panels for which the scores of the flagged panels exceed a predefined threshold; send, to a user device, the panels associated with the corresponding index values and indicia of which of the panels have been flagged; and modify the predicted narrative flow based on user interface inputs indicating corrections to the predicted narrative flow (0057: the order of bins displayed by the image display unit 50 may be incorrect. Therefore, the user may confirm the order of the bins displayed on the screen display unit 50, and corrects the order by manual replacement if necessary. The control unit 10 may perform replacement of the bin order according to the correction instruction operation. If the order is corrected, the corrected order is determined as the order of bins).
Claim 19:
Sugaya discloses ingest the graphic narrative; slice the graphic narrative into respective pages and determine an order of the pages; determine that panels on a page earlier in the order of the pages occur earlier in the predicted narrative flow than panels on a page that is later in the order of the pages; and display, on a display of a device, the panels according to the predicted order in which the panels are to be viewed (Figs 6A, 6B, 0054-56: the control unit 10 reads a bin order template corresponding to the determined closest image template, and allocates a bin order corresponding to the read bin order template to the comic capture data... the control unit 10 extracts the bins one by one from the comic capture data in the allocated bin order, and displays them in the image display unit... Comics are displayed in the image display unit 50 in an order corresponding to the bin order template. In FIG. 6 (A), a line inserted between bins represents a page break).
Claim 20:
Sugaya discloses display, on a display of a device, the panels according to the predicted order in which the panels are to be viewed, and rendering the panels in accordance with a screen size of the display of the device, wherein the device is an electronic reading device, a tablet or a smartphone on which is running an electronic reading application; or a website accessed via a web browser (0060-62: magnification may be changed according to the size of the screen to display the size of the bin. For example, the control unit 10 calculates a size of each bin included in the comic capture data, and disposes the each bin in a predetermined area of the storage unit 30. Moreover, the control unit 10 reads the image information database 33, and calculates, for each bin, the magnification that may be displayed by the image display unit 50 and may be displayed as large as possible. Furthermore, the control unit 10 enlarges and displays the each bin included in the comic capture data at the calculated magnification... in a case where the bin is small and difficult to see, the size of the bin may be changed such that the bin fits on the screen, as shown in FIG. 7 (B). With such configuration, it is possible to construct a comic data display system 1 in which a user is able to easily and visually recognize the contents even in a terminal with a small screen such as a smart phone... the bins are displayed in one column on each screen, but the bins may also be configured to be displayed in two or more columns according to the size of the terminal screen. For example, a smartphone may be set to display the bins in one column, and a tablet terminal having a screen of 10 inches or more may be set to display the bins in two columns. With such configuration, it is possible to achieve an excellent comic data display system that is easy to use while fully exerting the terminal's screen display performance).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sugaya in view of Gralley (USPUB 20070171226 A1).
Claim 10:
Sugaya discloses every feature of claim 1.
Sugaya, by itself, does not seem to completely teach displaying, on a display of a device, the panels with a visual indicator directing a reader according to the predicted narrative flow.
The Examiner maintains that these features were previously well-known as taught by Gralley.
Gralley teaches displaying, on a display of a device, the panels with a visual indicator directing a reader according to the predicted narrative flow (0029: In addition to the face of the character being displayed on the frame, there are also several buttons, 505, 510 and 515 that appear on the screen. The button 505 is a forward button that allows a reader to advance to the next sequence. The button 510 is a backwards button that allows a reader to return to the start of the sequence. Finally, button 515 is a `start over` button that allows the reader to return to the beginning of the story).
Sugaya and Gralley are analogous art because they are from the same problem-solving area: managing the presentation of digital content on an electronic device.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sugaya and Gralley before him or her, to combine the teachings of Sugaya and Gralley. The rationale for doing so would have been to provide controls to navigate the content being presented on the device.
Therefore, it would have been obvious to combine Sugaya and Gralley to obtain the invention as specified in the instant claim(s).
Claim 11:
Sugaya discloses every feature of claim 1.
Sugaya, by itself, does not seem to completely teach receiving reader inputs that control an advancement of the graphic narrative along the predicted narrative flow.
The Examiner maintains that these features were previously well-known as taught by Gralley.
Gralley teaches receiving reader inputs that control an advancement of the graphic narrative along the predicted narrative flow (0029: In addition to the face of the character being displayed on the frame, there are also several buttons, 505, 510 and 515 that appear on the screen. The button 505 is a forward button that allows a reader to advance to the next sequence. The button 510 is a backwards button that allows a reader to return to the start of the sequence. Finally, button 515 is a `start over` button that allows the reader to return to the beginning of the story).
Sugaya and Gralley are analogous art because they are from the same problem-solving area: managing the presentation of digital content on an electronic device.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Sugaya and Gralley before him or her, to combine the teachings of Sugaya and Gralley. The rationale for doing so would have been to provide controls to navigate the content being presented on the device.
Therefore, it would have been obvious to combine Sugaya and Gralley to obtain the invention as specified in the instant claim(s).
Allowable Subject Matter
Claims 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 2 and 17 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Note
The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims above for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571) 270-7761. The examiner can normally be reached M-Th 8-6 and Fri 7-12/OFF.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steph Hong can be reached on (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED H ZUBERI/ Primary Examiner, Art Unit 2178