DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Doherty et al. (U.S. Patent No. 12,050,863).
With regard to claim 1, Doherty teaches a method for reducing the size of computing device images ([col. 14, lines 65-66] the visual element can zoom a crop area—centered over a center of the face—out, such that the face is centered within the visual element and reduced in size; [col. 20, lines 45-50] as the user scrolls down the webpage to reduce an amount of the background image obscured by the mask), the method comprising:
identifying, by a computing device (Fig. 1; Fig. 2A, S140 and S155; Figs. 4-6; Figs. 8-9), an image to be used to configure the computing device ([abstract] visual object), wherein the image includes software programs and configuration information for use by the computing device during operation ([abstract] accessing a set of static visual objects and media formats; defining a multi-dimensional feature space representing possible arrangements of combinations of the set of static visual objects within the set of media formats);
configuring, by the computing device, the computing device according to the identified image (Fig. 1; Fig. 2A, S140 and S155; Figs. 4-6; Figs. 8-9);
after configuring the computing device according to the identified image, identifying, by the computing device, a request for a graphical user interface (GUI) resource ([col. 14, lines 65-66] the visual element can zoom a crop area—centered over a center of the face—out, such that the face is centered within the visual element and reduced in size);
retrieving, by the computing device, static GUI configuration information associated with the GUI resource (Fig. 1, static visual objects; [abstract] static visual objects) from a remote server over a network ([col. 7, lines 7-15] executed by a remote computer system; [col. 8, lines 34-36] executed by a computer system hosted on a remote computer system, such as a remote server); and
in response to retrieving the static GUI configuration information, displaying, by the computing device, a visual representation of the GUI resource on a display device (Fig. 1; Fig. 2A, S140 and S155; Figs. 4-6; Figs. 8-9), wherein the visual representation of the GUI resource is based at least in part on the static GUI configuration information ([col. 5, line 65 – col. 6, line 6] access a set of existing static media including a set of static visual objects; and extract the static visual objects to insert into a new media format to generate a responsive media. For example, the computer system can: access a static image including a set of icons and text boxes in a first static format).
With regard to claim 2, the limitations are addressed above and Doherty teaches wherein the image does not include the static GUI configuration information ([col. 15, lines 50-58] if the range of vertical positions assigned to the particular region of the static image file does not contain the new relative position of the visual element, the visual element can replace the particular region of the static image file currently loaded into the visual element with a different region of the static image file).
With regard to claim 3, the limitations are addressed above and Doherty teaches wherein the static GUI configuration information includes at least one of Javascript data, Hypertext Markup Language (HTML) data ([col. 8, lines 40-48] a web server hosted by the publisher can return content or pointers to content for the webpage (e.g., in Hypertext Markup Language, or “HTML”, or a compiled instance of a code language native to a mobile operating system)), or Cascading Style Sheets (CSS) data.
With regard to claim 4, the limitations are addressed above and Doherty teaches wherein the static GUI configuration information is packaged in a modularized application format ([col. 5, lines 20-30] The computer system can further: present these responsive medias to an operator via an operator interface executing on a computing device; and refine these responsive media formats based on guidance from the operator; [col. 8, lines 40-48] a compiled instance of a code language native to a mobile operating system, including formatting for this content and a publisher tag that points the web browser or app to the publisher's computer system (e.g., a network of external cloud servers)).
With regard to claim 5, the limitations are addressed above and Doherty teaches wherein the request is a first request ([col. 7, lines 30-36] the computer system can request access to a set of drafted content within the user's social media account or the user can select one or more content instances and upload them to the computer system), the static GUI configuration information is a first static GUI configuration information ([abstract] a static visual object), the visual representation is a first visual representation ([col. 6, lines 7-11] the computer system generates a multi-dimensional feature space defining a graphical space including all combinations of arrangements of static visual objects with each media format), and the method further comprising:
identifying, by the computing device, a second request for the GUI resource ([col. 2, lines 39-50] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects, in the set of static visual objects, represented in the feature container; retrieving a secondary media format, in the set of media formats, represented in the feature container);
retrieving, by the computing device, second static GUI configuration information associated with the GUI resource from the remote server ([col. 2, lines 39-50] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects, in the set of static visual objects, represented in the feature container; retrieving a secondary media format, in the set of media formats, represented in the feature container), wherein the second static GUI configuration information is different than the first static GUI configuration information ([col. 2, lines 39-55] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects…serving a first secondary responsive media, in the secondary set of responsive media, to a first device for playback to a first user responsive to inputs by the first user at the first device in Block S140); and
in response to retrieving the second static GUI configuration information, displaying, by the computing device, a second visual representation of the GUI resource on the display device ([col. 2, lines 39-43] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects, in the set of static visual objects, represented in the feature container), wherein the second visual representation of the GUI resource is based at least in part on the second static GUI configuration information ([col. 2, lines 39-43] retrieving a secondary subset of static visual objects), and wherein the second visual representation is different than the first visual representation ([col. 2, lines 39-43] retrieving a secondary subset of static visual objects… serving a first secondary responsive media, in the secondary set of responsive media).
With regard to claim 6, the limitations are addressed above and Doherty teaches wherein the first static GUI configuration information is associated with a first language ([col. 8, lines 40-48] a web server hosted by the publisher can return content or pointers to content for the webpage (e.g., in Hypertext Markup Language, or “HTML”, or a compiled instance of a code language native to a mobile operating system)), and the second static GUI configuration information is associated with a second language different than the first language ([col. 10, lines 15-25] natural language processing, label detection techniques, face detection techniques, image attributes extraction techniques, etc. The static asset (e.g., a 300-pixel by 250-pixel static advertisement image) can include text blocks, color pallets, images (e.g., images of faces, objects, places), context tags, hyperlinks to external websites, and/or other content related to advertisement of a particular brand and/or product, which the computer system can then identify, label, and extract from the static asset).
With regard to claim 7, the limitations are addressed above and Doherty teaches wherein the request is a first request ([col. 7, lines 30-36] the computer system can request access to a set of drafted content within the user's social media account or the user can select one or more content instances and upload them to the computer system), the static GUI configuration information is a first static GUI configuration information ([abstract] a static visual object), the GUI resource is a first GUI resource ([col. 6, lines 7-11] the computer system generates a multi-dimensional feature space defining a graphical space including all combinations of arrangements of static visual objects with each media format), and the method further comprising:
identifying, by the computing device, a second request for a second GUI resource different than the first GUI resource ([col. 2, lines 39-50] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects, in the set of static visual objects, represented in the feature container; retrieving a secondary media format, in the set of media formats, represented in the feature container); and
retrieving, by the computing device, second static GUI configuration information associated with the second GUI resource from the remote server ([col. 2, lines 39-50] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects, in the set of static visual objects, represented in the feature container; retrieving a secondary media format, in the set of media formats, represented in the feature container), wherein the second static GUI configuration information is different than the first static GUI configuration information ([col. 2, lines 39-55] in the secondary set of feature containers, the method includes: retrieving a secondary subset of static visual objects…serving a first secondary responsive media, in the secondary set of responsive media, to a first device for playback to a first user responsive to inputs by the first user at the first device in Block S140).
With regard to claims 8-14, these system claims correspond to method claims 1-7, respectively, and are therefore rejected with the same rationale.
With regard to claims 15-20, these article claims correspond to method claims 1-6, respectively, and are therefore rejected with the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yu et al. (US 2024/0096020) teaches generating a moving-viewpoint motion picture for an apparatus and reducing the associated amount of calculation.
Usuda (US 2023/0082499) teaches an information processing apparatus for processing medical image data and reducing the number of false-negative detection results.
C et al. (US Patent No. 10,319,116) teaches a system for providing dynamic color adjustment of electronic content and reducing the associated processing time.
Bonfiglio et al. (US Patent No. 11,386,590) teaches a system of color controls for visual accessibility within applications.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREA C. LEGGETT whose telephone number is (571)270-7700. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREA C LEGGETT/Primary Examiner, Art Unit 2171