Artificial intelligence (AI) has made a significant impact in the last few years and has the potential to greatly affect the next generation of workers, in both positive and negative ways, as the technology continues to advance. The World Economic Forum’s “The Future of Jobs Report 2020” predicts that 85 million jobs globally will be replaced by AI by 2025. The same report also indicates that AI can potentially generate 97 million new roles. However, the types of jobs that AI will create will differ from those that are being lost. New graduates are entering a different kind of workforce: the data entry and processing positions that are typically seen as entry-level roles for graduates and people early in their careers are now being automated through AI. This puts new graduates in an interesting position as they seek to enter the workforce.
The good news is that AI will elevate jobs that require problem-solving, creativity, and empathy, which in turn will create new opportunities like never before. The new generation of workers needs to be able to adapt to these changes, and if workers can make the transition and develop a strategic set of skills, AI can be seen as an asset and one of the best things for boosting a career. We are living in a fast-paced world, and AI is making a continuous impact on today’s workforce. It is important for people seeking new positions to keep an open mind and, more importantly, to build a diversified set of skills that will drive impact for the company of their choice.
How AI is transforming the workforce
Artificial intelligence (AI) is revolutionizing the way we work, and its impact on accessibility and equity in the workforce is profound. By leveraging AI technologies, organizations can address barriers, promote inclusivity, and create equal opportunities for individuals from diverse backgrounds. We will explore further how AI is transforming the workforce, enabling accessibility, and fostering equity.
Breaking Barriers through Adaptive Work Environments
AI-powered tools are reshaping work environments to accommodate individual needs. Voice recognition and natural language processing empower individuals with disabilities to communicate and interact effectively. Personalized workspaces, adjustable lighting, temperature, and other environmental factors enhance comfort and productivity. By adapting to the unique requirements of employees, AI promotes inclusivity and accessibility.
We see data showing that AI is promoting inclusivity and accessibility in many ways that affect workers. According to Accenture, 84 percent of C-suite executives feel they must leverage AI to accomplish their growth objectives, but most haven’t put AI to work to advance growth through inclusion. The research suggests many are not fully aware of the barriers to inclusion that may exist in their organizations, yet 67 percent of C-suite respondents believe they’ve built a supportive workplace that enables their employees with disabilities to thrive with the right technology, environment, and support. Nevertheless, only 41 percent of employees with disabilities agree.
Advanced Inclusive Hiring Practices through AI
According to LinkedIn’s research from the past year, 65 percent of organizations, including companies such as Google and Microsoft, have set up diversity initiatives, while 78 percent of companies focus on diversity to reinforce their cultures. Companies can now turn to AI-enabled tools and technologies to help avoid issues and challenges around inclusive hiring. AI algorithms are revolutionizing the hiring process, promoting diversity and equitable opportunities. Traditional biases are mitigated by anonymizing candidate information during initial screening, allowing AI to focus solely on qualifications and skills. This approach levels the playing field and reduces the unconscious biases that hinder underrepresented individuals. AI enables organizations to identify talent that may have been overlooked, fostering inclusivity in recruitment. Though we still have a long way to go, AI has played an impactful role in dismantling some of the bias and discrimination within the hiring process, making it possible for companies to achieve greater diversity and inclusion in hiring. Also, according to recent reports, recruiters can use AI tools to discover high-potential candidates outside of job boards and other traditional sources. This is how employers can find more diverse candidates who may have been missed before.
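As a rough sketch of the anonymization idea described above, and not any particular vendor’s tool, a screening pipeline can strip identity fields from a candidate record before a reviewer or ranking model ever sees it. The record fields and values below are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical candidate record: identity fields plus job-relevant fields. */
struct candidate {
    char name[64];         /* identity: hidden during initial screening */
    char gender[16];       /* identity: hidden during initial screening */
    int  age;              /* identity: hidden during initial screening */
    char skills[256];      /* job-relevant: kept */
    int  years_experience; /* job-relevant: kept */
};

/* Screening view containing only job-relevant fields, so the initial
 * ranking step cannot condition on identity. */
struct screened_view {
    char skills[256];
    int  years_experience;
};

static struct screened_view anonymize(const struct candidate *c) {
    struct screened_view v;
    strncpy(v.skills, c->skills, sizeof v.skills - 1);
    v.skills[sizeof v.skills - 1] = '\0';
    v.years_experience = c->years_experience;
    return v;
}

int main(void) {
    struct candidate c = { "Jane Doe", "F", 29, "python, sql, statistics", 4 };
    struct screened_view v = anonymize(&c);
    printf("screened candidate: skills=[%s], experience=%d years\n",
           v.skills, v.years_experience);
    return 0;
}
```

The point of the sketch is simply where the redaction happens: before anything downstream, human or model, can be influenced by who the candidate is rather than what they can do.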
AI is also making an impact around data-driven decision making for equity. AI’s analytical capabilities unlock valuable insights from vast amounts of workforce data, aiding evidence-based decision-making for equity. AI systems analyze patterns and trends to identify potential biases and inequalities within an organization. Armed with this knowledge, organizations can develop targeted strategies to address systemic issues, fostering diversity and inclusion.
AI tools are also playing a vital role in expanding access to job opportunities for minorities which gives candidates a better chance of landing a fulfilling position. By analyzing candidate skills, qualifications, and job requirements, AI-powered platforms provide personalized job recommendations, opening doors for individuals from diverse backgrounds. These tools broaden talent sourcing by identifying and connecting organizations with underrepresented candidates, fostering a more equitable workforce.
Bridging the Skills Gap
Overall, AI is making a significant impact in bridging the skills gap in the workforce. AI-powered learning platforms offer personalized training and upskilling opportunities, irrespective of one’s background or location. By considering individual learning styles and preferences, AI fosters equitable access to educational resources. Individuals from underrepresented groups can acquire relevant skills, empowering them to pursue new job opportunities. AI bridges the skills gap, ensuring equitable access to career development.
7 Ways AI tools can assist minorities and recent graduates in securing new job opportunities
Resume Optimization: AI-powered tools like resume.io can analyze resumes and provide feedback on how to optimize them for specific job positions. These tools can suggest improvements in formatting, keywords, and content to increase the chances of resume visibility and attract the attention of recruiters.
Job Matching: AI algorithms can analyze job requirements and candidates’ skills, qualifications, and experience to match individuals from minority backgrounds with relevant job opportunities. By considering a broader range of factors beyond traditional qualifications, AI tools can identify job openings that may be a good fit for minority candidates.
Skill Development: AI-powered learning platforms can provide personalized training and upskilling opportunities tailored to the needs of minority job seekers. These platforms can identify skill gaps and recommend relevant courses, tutorials, or resources to enhance candidates’ competencies and increase their chances of securing job opportunities.
Interview Preparation: AI tools can assist candidates in preparing for interviews by providing mock interview simulations and feedback. These tools can analyze candidates’ responses, assess their performance, and offer suggestions for improvement, helping them build confidence and enhance their interview skills.
Networking and Mentorship: AI-powered platforms can facilitate networking and mentorship opportunities for minority job seekers. These tools can connect individuals with professionals from similar backgrounds or industries, providing guidance, advice, and valuable connections that can enhance their job prospects.
Bias Detection and Mitigation: AI tools can help detect and mitigate biases in job postings, application screening, and candidate evaluations. By analyzing language, keywords, and historical data, these tools can identify potential biases and provide recommendations for inclusive and equitable practices, promoting fair hiring processes.
Accessibility Accommodations: AI can enhance accessibility in the job search process for individuals with disabilities. AI-powered tools can provide features such as screen readers, voice recognition, or captioning to make job search platforms and applications more accessible, ensuring equal opportunities for all candidates.
Overall, AI is here to stay, and it’s important for the workforce to understand how to leverage it in pursuit of your future career goals. AI tools can be valuable and can complement your efforts, but they do not replace human interaction and personalized job search strategies. It is crucial to tailor your approach and use the relevant AI tools to maximize the benefits in your job search journey.
The huge success of the latest games consoles together with their ability to deliver telly-friendly graphics features such as 4K resolutions, high dynamic range and high/variable refresh rates has seen TVs competing more and more aggressively on both gaming performance grounds and, more recently, the number of streamed gaming services they support.
Samsung was the first brand to introduce a dedicated Gaming Hub menu area and special low-latency playback features to its TVs’ smart interfaces a few years back, and now it’s expanding its already strong roster of gaming services (which includes Xbox, NVIDIA GeForce Now, Amazon Luna and Utomik) with two new additions: Antstream Arcade and Blacknut.
Antstream Arcade claims to be the world’s largest cloud-based retro gaming service, carrying more than 1,400 full classic games alongside more than 500 ‘mini challenges’. Antstream’s library includes the likes of Pac-Man, Galaga, Dig Dug and Double Dragon, and every single title will be accessible via the Samsung Gaming Hub on select Samsung smart TVs and Smart Monitors on a subscription basis that at the time of writing costs just $12/£12 for a whole year of access thanks to a current 70% discount offer.
“We’re proud to align with strong partners like Samsung Gaming Hub that can help us reach more players,” says Steve Cottam, CEO of Antstream Arcade. “We’ve experienced firsthand the excitement people have playing the games of their youth and sharing those games with the next generation of gamers.”
Blacknut, meanwhile, promises a focus on family-friendly titles, giving subscribers access to more than 500 premium games – including AAA releases – across a wide range of genres, via five simultaneously accessible player profiles. Highlight titles include Metro Exodus, Overcooked! 1 & 2, TT Isle Of Man: Ride On The Edge 2 and Tour De France 2022. The latter two, Samsung says, mean gamers can immerse themselves in online races while watching the real 2023 events live on their Samsung TVs. European subscribers additionally currently have access to a full selection of ‘Disney favourites’.
The Blacknut service supports its family-friendly approach with optional, easy-to-set-up parental controls, backed by careful curation of its titles. The parental options available include a separate pin-protected profile where junior gamers can access child-friendly titles such as adventure games based on favourite TV shows such as Paw Patrol and Gigantosaurus.
It’s possible to sync as many as four Bluetooth controllers at once via the Blacknut platform, so that the whole family can join in co-op or party games.
Blacknut is offering subscribers who join via a Samsung TV or Smart Monitor an exclusive 15-day free trial, during which you can play any of its 500+ titles to figure out whether the regular $15.99/£12.99-a-month cost is going to be worth it to you.
Both of the new Antstream Arcade and Blacknut services are available now via the Gaming Hub on select 2021, 2022 and 2023 Samsung TVs, while owners of 2021 and 2022 TVs in markets where the backported Samsung Gaming Hub is not available will be able to find the new apps in their TV’s main App Store.
Former IRS Commissioner David Kautter, now with RSM, discusses the recently released report on the feasibility of an IRS-run direct file program.
This transcript has been edited for length and clarity.
David D. Stewart: Welcome to the podcast. I’m David Stewart, editor in chief of Tax Notes Today International. This week: cutting out the middleman.
On May 16, the IRS released a report on the feasibility of an IRS-run free return filing system. While there have been options to prepare taxes for free for several years, the Free File program was run by a group of tax preparation companies. Some of these companies had come under scrutiny in recent years for directing eligible taxpayers to paid services.
So is this new program a good alternative? Here to talk more about this is Tax Notes senior reporter Jonathan Curry.
Jonathan, welcome back to the podcast.
Jonathan Curry: Hi again, Dave.
David D. Stewart: So to start off, could you tell us some background on what Free File is?
Jonathan Curry: Yeah, I’d be happy to. The Free File program is an over-two-decades-long, public/private partnership between the IRS and what’s called the Free File Alliance, those companies that you referred to that prepare tax returns.
That started with the IRS Restructuring and Reform Act of 1998, 25 years ago. That law set a goal for the IRS to have 80 percent of taxpayers file electronically by 2007. And the IRS decided that the best way to do that was to enlist the help of the private sector, which all sounds good on paper because that frees up the IRS to do other things. They don’t have to worry about developing their own system, maintaining it, overseeing it. They can just kind of hand that off to someone else. In practice, I think most would say it’s been disappointing at best. Some people have stronger words to say than that.
The use of the Free File program peaked in 2005, and it’s really only just muddled along ever since then. The program took a nosedive after a series of ProPublica articles back in 2019, which you alluded to earlier, that showed how these Free File member companies were intentionally steering taxpayers away from free services and towards paid products instead. Last year, Intuit, which owns TurboTax, agreed to pay out a $141 million settlement over the claims that it scammed users with claims of “free, free, free.”
So in practice, only about 2 to 3 percent of the taxpayers who are otherwise eligible for Free File, as in that they meet these income limitations, are actually using it. In that context, the IRS has spent the last couple months studying whether it should throw its hat into the ring and set up a free filing system of its own. It would let taxpayers file their returns directly with the IRS instead of going through an intermediary to file their taxes. This is commonly referred to as direct file because you’re filing directly with the IRS.
This idea does have some very vocal supporters in Congress, Senator Elizabeth Warren chief among them, I would suppose, and Treasury Secretary Janet Yellen last year at a congressional hearing said that the IRS was going to do this, it’s just a matter of timing and funding.
Fast-forward a few months after her statement there, the IRS gets $80 billion as part of the Inflation Reduction Act, and within that $80 billion big, massive chunk of change, there’s a small little provision that gave the IRS $15 million to study the question of what it would take for the IRS to do this.
David D. Stewart: What did the IRS have to say in its report?
Jonathan Curry: Yeah, so that report came out [May 16] and there was a lot in there about taxpayer attitudes towards it. They surveyed taxpayers. They also developed what they call a prototype, a functioning prototype of what a direct-file system would look like to a taxpayer. It had limited functionality, but it let someone actually see what it would be like to click through the menus and prepare your taxes that way.
It also explored the pros and cons from an administrative perspective, what it would require from the IRS, the different stakeholders that would be involved, like state tax administrators and things like that. It didn’t really come to any sort of neat and tidy conclusion, though. It just sort of explained the different aspects of what would be involved here.
David D. Stewart: Now I understand you recently talked to a guest about this. Could you tell us who you talked to?
Jonathan Curry: Yeah, sure. I talked to David Kautter. He’s a former acting IRS commissioner. He is a former Treasury assistant secretary for tax policy. Currently, he’s at RSM and he’s also a member of our board of directors here at Tax Analysts. And like I said, he’s a former IRS commissioner. He’s someone that would know what this would take from an inside perspective, and I think that’s a pretty valuable perspective to have here.
David D. Stewart: What sort of issues did you get into?
Jonathan Curry: Well, we started with his impressions of the report, and I don’t want to spoil too much, you’ll have to listen for that, but I’ll just say upfront, he’s a little bit disappointed with what he saw there. There’s a lot of unanswered questions in his mind, especially as someone who’d be thinking about what it would take for this to actually unfold.
He has a lot of concerns over the practicality of the IRS getting into the direct-file game, whether it really is going to be this boon to taxpayers that advocates hope for, if there’s a gap between reality and the ideal here, and also some of the obstacles the IRS is going to need to overcome.
In particular, one of the issues we’re really going to talk about a lot is the challenge of state tax return filing, which you might be wondering, who cares about that? We’re talking about the IRS. Well, taxpayers. I don’t know about you, but when I file my taxes, I file my state and federal at the same time. And for the IRS to be doing that for you, suddenly makes this into a bigger program than the IRS might want to get into.
David D. Stewart: Well, all right, let’s go to the interview.
Jonathan Curry: I’m sitting here with former IRS Commissioner Dave Kautter, former Treasury assistant secretary for tax policy, longtime fixture in the tax community. Dave, welcome to the podcast.
David Kautter: Thank you, Jonathan. It’s nice to be here.
Jonathan Curry: Let’s jump right in. The IRS released its long, I guess, much-ballyhooed, long-anticipated report on the feasibility of a direct-file program. Basically, the IRS allowing taxpayers to file directly with them instead of going through an intermediary like TurboTax and whatnot. The report came out last week and I’m curious, did this answer every question you had about what the system might look like or are there a few unanswered questions there for you?
David Kautter: The short answer is no. The report’s helpful in some areas and not helpful at all in other areas. Many questions are unanswered in the report. In fact, I’d probably say there are more unanswered questions than there are answered questions.
But in my mind, the three most significant unanswered questions in the report are first, which taxpayers would be able to use direct file and when? In other words, what types of income could be reported, will there be limits on the amount of income that could be reported? What types of activity could be allowable? In other words, on the right types of activity question, there are tens of millions of taxpayers who are relatively low-income, who are independent contractors, the so-called gig workers — will they be allowed to participate in the direct-file program? So the first big unanswered question is, who and what types of income?
Second is, what will the user interface look like? Will it be easy to understand and use?
And third, and this is a point which I think is understated consistently in the report, is what about state tax returns? There are 43 states that impose some sort of individual income tax, 41 impose taxes on wages, one imposes a tax only on interest and dividend income, one only imposes a tax on capital gains. But that is a significant question that the report acknowledges, but I think dramatically understates the significance of.
Jonathan Curry: There was a MITRE [Corp.] study that came out just before this report was released that was examining taxpayer attitudes towards a hypothetical IRS direct-file system, and they offered a couple different scenarios. But one of the key takeaways was that state tax filing, the ability to file your state taxes simultaneously with the federal, was basically the big defining factor between whether people were more likely to want to switch to an IRS-run system versus sticking with whatever they’ve used previously.
David Kautter: I think that’s exactly right. The ability to only enter data once is significant for taxpayers. If under this system that’s being proposed, taxpayers had the option to file for free with the federal government directly with the IRS, and then they had to take the same data, go to a commercial software preparation program, enter the same data in, and pay to file the state return, the attractiveness of the direct-file program goes down dramatically.
Jonathan Curry: Do you have any sense of how difficult it would be for the IRS to make that function possible, to make it a simultaneous filing [of] state and federal?
David Kautter: What’s interesting about the report to me is it talks about working with state administrators and that’s always important, but I can’t imagine 43 states are going to develop a direct-file program for taxpayers. And so there’s going to have to be an effort here to resolve this problem one way or another, or I just don’t think the IRS direct-file program is going to be very successful at all.
Jonathan Curry: My colleague, Lauren Loricchio, wrote a story, too, after this report came out. She spoke with the head of the FTA — which I think it’s the Federation of Tax Administrators, it’s for state tax administrators — and the head of that said that they look forward to hearing from the IRS, but at this point, that collaboration has not yet started, which is perhaps a bit surpris[ing] given that the report describes an effort to launch a pilot program in 2024. So perhaps state filing might not be on tap for at least the initial pilot.
David Kautter: Exactly. There are only two ways to get this done that I can see, right? The states build direct-file programs or the IRS builds 43 programs. Every state’s tax laws are somewhat different; some are dramatically different from other states. And so building a federal direct-file program is one thing. Building 43 state direct-file programs is [a] completely different magnitude, and the report acknowledges the importance of being able to file on a consolidated basis, but it just doesn’t deal with it directly. And I think that is one of the most significant issues, for the reasons I stated earlier.
Jonathan Curry: Can you do me a favor and describe, in a sense, the best- and worst-case scenario? So you’re basically saying, the best arguments for the supporters of this, the best arguments from critics of this. What’s the best thing that could come from this and what’s the worst way this could explode and fall apart?
David Kautter: Sure. Well, let me focus on this from a policy point of view and administrative point of view. I think the best case is the IRS builds an effective, focused prototype that’s easy to use, secure, and popular. The prototype is piloted next filing season, and the prototype is then expanded, best case.
Worst case, IRS diverts resources from other significant technology priorities, ends up with a prototype that’s expensive, doesn’t work very well, and is not widely used, and then discarded. So those are the two extremes to me.
Jonathan Curry: Sure. OK. Fair enough. What do you think it would take for something like this to succeed?
David Kautter: Sure. I think there are three core elements, Jonathan. First, a user-friendly interface that’s easy to understand and use. Second, there has to be high-quality, responsive customer service. Questions — no matter how simple, no matter how straightforward the software is — there’re going to be questions. And if those questions can’t be answered, taxpayers are going to quickly abandon the system. And third, state tax return capability, reentering data and paying to file the state return when you’ve just filed a federal return for free, I think is a significant issue.
Jonathan Curry: What about some of the practical, or I guess you call them operational, challenges for the IRS? We talked about the need to collaborate with state tax administrators, but this would also cost money. Would the IRS need to get more funding for this? What do you sort of see?
David Kautter: Sure. The funds are important. I think the single most important operational obstacle is mindset. Currently, the IRS does not provide the level of proactive support that this sort of a system would need. So in other words, right now the IRS does a pretty good job of generating publications, and last year did a very good job of answering the phone.
A direct-file system changes the mindset. To me it’s sort of like the difference between sitting behind a cash register and having customers walk up and pay for a product, as opposed to a sales representative who has to go out and make sales calls and actually convince people to use the product. And so it is this mindset which needs to change, and you see it not just in the government or the IRS, you see it in private industry all the time. People who are their whole career, reactive, are asked to be proactive, build something that people really want, as opposed to something people are forced to do.
And it is a completely different mindset and it adds an entirely new dimension. You have to be more proactive, you have to be more customer-oriented. And I don’t want to say it’s the complete opposite, because the IRS engages in outreach and is concerned about taxpayers understanding the tax law. So it’s not a complete shift, but it’s a significant shift.
And sure, the money’s important, and sure, the software has to be written in a user-friendly fashion, and you need customer support, but it’s this viewing the taxpaying public and their compliance activity in a proactive fashion as opposed to a reactive fashion that I think is the biggest operational obstacle here.
Jonathan Curry: It’s interesting, we were talking about the prototype they tested. They tested a functional prototype, mocked up so that people could see what this would look like. They tested, I think, with only about 14 people, according to the report, but nevertheless, they tested it and they said the feedback to that was actually surprisingly good.
People responded saying, they’re like, “Oh, this actually is better than I anticipated the IRS would come up with.” So it seemed like it might have been more user-friendly than you might otherwise envision a state or federal government agency coming up with.
David Kautter: It’s hard to interpret exactly what that means. I had a friend who used to say, “Just because you end up on third base doesn’t mean you hit a triple. You might have been born there.” And I think in a case like this, you worry that the bar is so low, that people expected it to be such a miserable experience, that when it wasn’t, they said, “Well, this is pretty good.”
Again, it’s hard to tell from the wording in the report, but I wondered whether the bar was so modestly low that it was easy for the people who were part of the 14 who got to use this piloted software to think it was better than they expected.
Jonathan Curry: Now, you were talking about how one of the advantages the private sector has is they need to go out and get people to want their product, whereas the IRS might just be simply like, “Here’s a product, use it if you want it.”
Is the allure of free, because the IRS, as I understand, they envision this being a free service, is that going to be enough to get people to switch their return filing provider of choice from say, TurboTax, to the IRS?
David Kautter: I think free will be enough for some taxpayers, but it’s hard to tell how many. The state return issue, I’ll bring up again. It’s a huge issue. Even though it’s free, to then turn around, have to reenter the data and pay to file the state tax return, I think will disappoint many people who might otherwise use the IRS direct-file.
Jonathan Curry: And then how does the issue of the IRS developing and running this direct-file program sit with Congress right now? Does it fit neatly within a partisan box or is there some crossover there? Where do you sort of see that?
David Kautter: Right now I think it’s in the formative stage. It’s not clear where most members of Congress stand on this at the moment. So far, the discussion has taken on a partisan aspect to it, which is not healthy if you want a free and open and full consideration of whether this direct-file approach makes sense. I think the hope of turning around the early opposition that some in Congress have expressed will depend on a successful rollout of a prototype next year.
I think if the prototype goes very well and people feel that the data they’re given is objective and that the user response has been favorable, that will go a long way toward turning around some of the resistance.
But if people feel they’re being spun and they don’t feel they’ve gotten a straightforward story with respect to the user experience, I think it’ll get dramatically worse.
Jonathan Curry: Along those lines, Commissioner [Daniel] Werfel has been in the hot seat actually lately after it was revealed in the report that the IRS was already developing a pilot program to test this out or to try it out in 2024. The view is that the IRS should have waited for the results of this report, debated it in Congress, waited for some sort of authorizing, “Go forth and do this,” rather than plunging ahead.
You’re a former IRS commissioner, so you’re well suited to know what it’s like to sit in that seat. I hate to make you be the backseat driver to a current commissioner, but I am curious to know, do you think this might have been a misstep, could have been handled differently?
David Kautter: Well, I think it’s very unfortunate that the administration moved forward in the way that it has. I think direct filing with the IRS is an issue that deserves full and open consideration if it’s going to have any chance of bipartisan support.
Instead, the process followed so far, specifically building and modeling a prototype and making plans to launch a direct-file program next year before the report was ever finalized and delivered, has the appearance that the study was perfunctory, that the surveys and the reports were irrelevant to the decision to move forward, and that the federal government wasted $15 million on a report that was irrelevant to the decision-making process.
It’s unclear to me who drove this process, but in my experience, this is not how the IRS tends to operate. So my guess is that this process was driven elsewhere in the administration, but I do not have firsthand knowledge of how this process—
Jonathan Curry: You’re not currently in the commissioner’s seat.
David Kautter: I am not.
Jonathan Curry: OK, understood. Well, let’s talk about costs. The study was $15 million for this. The report says the estimated cost of developing and running this, the annual cost is between $64 million and about $250 million per year, depending on a whole range of factors. And even that wide-ranging estimate is subject to significant uncertainty according to the report.
Also in Washington, everyone likes to talk about cost in 10-year increments, [so] $64 million or $250 million turns into $640 million to a $2.5 billion price tag in our typical way of thinking about the cost of things. First question on that, does that price tag seem realistic?
David Kautter: I think the greatest understatement in the entire report is its statement about “considerable uncertainty” around the development cost. At one point in my career, I was responsible for the tax technology budget of a Big Four accounting firm. Experience tells me that it is virtually impossible to anticipate all the problems involved in a project of this magnitude.
I think the costs are substantially understated for at least three reasons. First, the IRS has no experience in this area. Second, the IRS doesn’t have the capacity to build something like this right now. So this initiative is going to require the IRS to hire people with the right skills or outsource the development or a combination of the two. And third is the issue we’ve talked about earlier in this podcast, which is the state tax capability.
I don’t know how IRS proposes to solve that problem, but when the report says there’s considerable uncertainty, I would say that’s understated dramatically.
Jonathan Curry: The price tag estimate that we talked about, the $64 million and $250 million, is based on an estimate with, again, probably considerable uncertainty, but they talk about serving between 5 million and 25 million taxpayers.
The Free File program, the Free File Alliance, where the private sector is supposed to be providing free return preparation for low-income taxpayers, only had about 4 million users for tax year 2020. What does something like this do to the tax preparation industry? Is this the apocalyptic end of TurboTax, Intuit, and all the above?
David Kautter: Well, I think private industry is going to be just fine in this area, especially in the short term. And the reasons I say that are, brand loyalty is a compelling motivation for many consumers, including with respect to tax software. There is a certain comfort that comes with using the same software year after year. You become familiar with it, you become comfortable with the questions. Third, I’d say ease of already having last year’s data in a system makes life simpler. You can compare last year to this year.
Concerns about the IRS being the preparer, reviewer, and the enforcer are going to discourage some taxpayers from using the program. And I think some will use the IRS software, I’m sure of that, but I don’t think the tax preparation software industry has much to worry about, especially in the short term.
Jonathan Curry: And again, I’m going to use this metaphor, I’m beating it to death here. You wore the commissioner’s hat for a time, you’ve been in the office there. Do you think the IRS wants to be doing this? Now to some extent, perhaps it doesn’t matter what they want to do, they do what they’re told to do by Congress, Treasury, and so forth. But within the building, is there a sort of eagerness to go in this direction?
David Kautter: I don’t know where the current commissioner and administration stand on this at the moment, but I would say the IRS has a long list of technology priorities. In 2019, they issued a modernization report which listed dozens and dozens of programs that they needed to improve or develop in order to serve taxpayers effectively.
It is hard to believe that all of a sudden, direct file has catapulted from not on that list at all, to No. 1 on the list. So I think that the IRS has done a really good job of identifying what it needs to do to improve user experience with the IRS.
It’s in the process of trying to build some of that software with the infusion of cash from the Inflation Reduction Act. They have the funds to accomplish that, but direct file was not something when I was in the acting commissioner job that was high on our list.
The commercial software vendors have good products, they work well. Data got into the system effectively, the IRS had an effective operating partnership with those software vendors. And so IRS was inclined, at least when I was there, to focus on areas [which] needed support, where you could substantially improve the taxpayer experience, and this was not one of them.
It seems to me in part, this is being driven by a desire to help taxpayers save money by being able to file for free. And there is something irrational about a tax system where the federal government wants you to file electronically and won’t allow you to file directly with them electronically, where you have to go through a software vendor.
So I understand the arguments and I understand the irrationality of it. I just have a hard time believing that this is top of the career folks’ list at the IRS when it comes to technology initiatives.
Jonathan Curry: And I think the IRS report itself, as well as the independent evaluation included in the IRS’s report by New America and their external people helping them out with that, they both came to the conclusion that if this is to succeed, it’s going to require leadership, IRS leadership, to really commit to this. I think you talked about this earlier, a half-hearted attempt to just like, “Here’s a direct-file thing. You can use it if you want.” It’s not going to get off the ground well.
David Kautter: This is going to take a sustained, focused effort to convince taxpayers to use the new software, the direct-file system. As you pointed out, Jonathan, earlier, we’ve got a Free File system now where taxpayers can file their federal return for free.
There are around 70 million taxpayers who are eligible for Free File and about 3 percent of those taxpayers use that system. Why a dramatic number of taxpayers, in excess of the 3 percent, would all of a sudden decide to use a direct-file system that the IRS has come up with instead of the commercial software that’s available for free, is not clear to me.
So I think the only way this becomes widespread is the system is easy to use. There’s plenty of customer support. The IRS advocates actively that taxpayers can benefit from using this system, that it is for free. And I think that’s the only chance this system has of prospering going forward. And I can’t help but mention, again, you’ve got to solve the state tax return problem, I think, or this will always be just a shadow of what it could possibly be.
Jonathan Curry: All right. Well, Dave, thank you so much for being here. It’s been a pleasure talking with you about Free File.
David Kautter: Jonathan, thanks for inviting me. It’s an interesting topic and it’s going to be interesting to see how it evolves.
Artificial intelligence is advancing so rapidly, it can be hard to speculate about its effects on elders. What we can see is that applied to healthcare, it could be very positive. For older adults with multiple chronic conditions, there are different doctors with different perspectives, each a specialist in one thing. For anyone who has experienced what we call “fragmentation” in our healthcare system, you may see that the doctors don’t always communicate with one another.
I can imagine a central database for each patient created immediately with AI, so that any physician could immediately see a full medical history and how an issue compares with every factor known or unknown about the symptoms and signs. Based on that vast store of information, an accurate diagnosis and treatment plan is created.
How It Is Now: Too Much Guesswork
That would be in sharp contrast to how things are done now. Treating doctors who currently do not see each other’s records would know right away what other providers have diagnosed and planned, and what medications have been prescribed. Guesswork would be reduced. Different perspectives could be compared and conclusions drawn. The need for a second opinion might even be eliminated.
Another benefit would be complete information about every medication, its side effects, warnings, and contraindications. Clearly, some aging parents are getting the wrong medications for their individual profiles. Selecting the right ones, now sometimes a matter of hurried decision-making, would be streamlined. Medical care in general could improve, and for elders in particular, who often have multiple health conditions, it could get a lot better. AI could vastly improve the accuracy and individualization of treatment.
The Possible Dark Side Of AI On Elders
We already know that AI has frightening capabilities. Seniors who use the internet can be on it without true regard for the dangers. Financial elder abuse is a multi-billion-dollar industry in the U.S. One particularly evil part of this is the “Grandma scam,” featured at one time on the TV show 60 Minutes. The program featured several older adults, none with cognitive impairment, who were all scammed by a fake phone call from someone posing as a grandchild or other relative claiming to need money urgently. All of the featured adults fell for the scam and gave money to a courier, supposedly for the relative in distress.
Without AI, we do have caller ID, and we can listen to the caller and respond if we don’t recognize the voice on the phone. The scammers make excuses (“I have a cold”), but a strange voice can at least make someone suspicious. With AI, it is possible not only to spoof the caller ID to make it look as if the call really came from the grandchild, but also to construct a duplicate of the actual grandchild’s voice. That is frightening! If the targeted victim sees a grandchild’s name on caller ID and hears the AI-generated voice of that person, the scammers will have even more success with their evil plots to steal money.
I can also imagine AI being used to collect data on any aging parent from every possible source: social media, public records, family tree information from the internet, etc. A profile of all potential victims could be created to give the scammers ready targets in minutes. These are just some of the imagined downsides to the power of AI.
As I do not have a criminal mind myself, I am only able to go on the facts of elder abuse as I know them now to guess at what other evil can be done to elders. But we do know that anytime there is any opportunity, thieves will jump on it to steal. In my imagination, that is the most terrifying aspect of AI as we see descriptions of its boundless capabilities.
What Is Needed
Of course, developers of AI themselves say that regulation is needed right now. And we can imagine how difficult it will be to regulate something this new and unprecedented in such a rapidly developing technology. Regulation can only go so far. Some developers of AI have opined that if left unchecked and unregulated now, it could destroy us.
We don’t yet know which of AI’s seemingly unlimited capabilities will be most likely to help our aging parents. We can say at this moment that no computer-generated information can have morals, compassion, or ethics. These look like distinctly human characteristics. Can AI generate those too? Maybe not. A tech device cannot put a caring hand on an elder’s shoulder and say, “Don’t fall for that scam.” Personal vigilance in watching closely over our aging parents and other loved ones will always be needed.
In an attempt to accelerate RISC-V adoption, a global consortium of industry leaders has banded together to form the RISC-V Software Ecosystem (RISE) Project. According to the project’s press release, its primary objective is to ensure rapid “availability of software for high-performance and power-efficient RISC-V cores running high level operating systems”.
In an interview regarding the project’s launch, Amber Huffman, Data Center Ecosystem Lead at Google and RISE Chairperson, made it a point to emphasize that this project is not meant to replace, overshadow or otherwise conflict with RISC-V International’s efforts. In fact, in order to be a member of RISE, companies and contributors first need to be RISC-V International members. Additionally, the effort is about coordinating and prioritizing work as well as collaborating to solve the problems that stand as roadblocks to adoption. Ideally, this will remove duplication of effort and get everybody working toward common goals, thus creating the most efficient path to widespread adoption.
To this end, the project has identified the following areas for its initial focus: compilers and toolchains, system libraries, kernel and virtualization, language runtimes, Linux distro integration, debug/profiling tools, simulator/emulators, and system software.
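To make the “compilers and toolchains” and “Linux distro integration” items concrete, the practical goal is simply that ordinary code builds and runs on RISC-V the way it already does on x86 or Arm. A minimal sketch, assuming a GNU cross-toolchain and QEMU user-mode emulation are installed (package and binary names vary by distribution, so treat the commands as illustrative):

```c
/* hello_riscv.c - nothing here is RISC-V-specific; the point of the
 * ecosystem work is that standard code like this simply builds and runs.
 *
 * Illustrative build/run commands (names vary by distro):
 *   riscv64-linux-gnu-gcc -O2 -o hello hello_riscv.c    # cross-compile for riscv64
 *   qemu-riscv64 -L /usr/riscv64-linux-gnu ./hello      # run under user-mode emulation
 */
#include <stdio.h>

int main(void) {
    printf("Hello from a riscv64 binary\n");
    return 0;
}
```

Everything RISE is chartered to coordinate sits underneath a snippet like this: the compiler that emits RISC-V instructions, the C library and loader the binary links against, and the distribution packaging that puts them on a developer’s machine.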
Current members of the governing board include, but are not limited to, Google, Intel, Imagination Technologies, MediaTek, Nvidia, Qualcomm and SiFive. It is still early on, with members having met for only six weeks, but it is clear that they are committed to the initiative, having literally bought in with either a €100,000 investment or the equivalent of two full-time engineers working on technical solutions.
“Accelerating RISC-V support in the open-source software ecosystem, aligned with platform standards, is critical to the growth of RISC-V adoption,” said Mark Skarpness, Vice President and General Manager of System Software Engineering at Intel. “Intel is pleased to join other industry leaders in the formation of RISE to further this goal.”
Qualcomm agrees. “With a proven track record of technology innovation and leading software expertise, Qualcomm Technologies is committed to helping expand the RISC-V software ecosystem. RISC-V’s flexible, scalable, and open architecture enables benefits across the entire value chain – from silicon vendors to OEM manufacturers to end consumers,” said Larry Wikelius, Senior Director of Technical Standards, Qualcomm Technologies, Inc.
From an end-market standpoint, each RISE member has different target markets based on each company’s strategic business objectives. “NVIDIA’s accelerated computing platform — which includes GPUs, DPUs, chiplets, interconnects and software — will support the RISC-V open standard to help drive breakthroughs in data centers, and a wide range of industries, such as automotive, healthcare and robotics,” said Kevin Kranzusch, Vice President of System Software at NVIDIA. “In joining RISE, we look forward to collaborating with other industry leaders across a variety of disciplines to grow the RISC-V ecosystem.” However, Huffman did indicate that consumer and mobile applications are providing a lot of the driving requirements, from which solutions for the other verticals will likely be derived. This makes sense given the size, scale, and breadth of those markets, which provide the opportunity to solve problems on a systematic basis as opposed to solving unique, one-off issues.
These markets are currently being served by both x86 and Arm architectures. However, this project couldn’t have come at a better time in providing the industry with a viable, open processor alternative, and is similar to how the industry enabled the Arm ecosystem. As workloads continue to get more demanding, including the increased demand for artificial intelligence (AI) and, more specifically, generative AI, there is an increasing movement toward more specialized and sometimes customized processing and instruction sets. The workload challenges, along with well-documented challenges at Arm, have opened up a window of opportunity that is ripe for RISC-V to RISE out of the theoretical primordial soup and further evolve into a real, commercially viable option.
While there is a growing list of semiconductor vendors and OEMs developing and adopting RISC-V CPU cores, Tirias Research believes that the formation of RISE is a critical step towards widespread adoption of the architecture. Accelerating the development of the software ecosystem will further proliferate the use of RISC-V over the next few years.
As I’ve written about recently, for years Synopsys has been using AI to push the envelope on the electronic design automation (EDA) software that makes today’s semiconductors possible. Now its latest quarterly numbers are out, and if anything I’m even more impressed by what the company is doing and the impact it’s having in the industry.
At a moment when lots of companies are shaky because of broader economic conditions—and when the chip industry is facing lower demand—Synopsys’ numbers are moving up and to the right in every important area. Sure, some of this has to do with the offset between designing chips and building and inventorying them, but it’s still impressive.
The latest results blew past previous guidance, the company hit another record for quarterly revenue, operating margin is up, EPS is up and the company raised its guidance for annual revenue. It’s all a testament to strong execution of its “Smart Everything” strategy that Synopsys launched more than a decade ago.
After the earnings call, I had a chance to talk with CEO Aart de Geus and CFO Shelagh Glaser to get the Synopsys perspective on what’s happening at an interesting time for AI and the tech landscape as a whole. In this piece, I’ll share some of the highlights of that conversation and where they see things heading in the months and years to come. Let’s dig in.
Synopsys beats guidance for revenue, EPS and margins
First, the numbers. Quarterly revenue of $1.395 billion was above the high end of guidance, as was non-GAAP EPS of $2.54. The company generated $703 million in quarterly operating cash flow, and its non-cancelable backlog rose to $7.3 billion.
Given how it’s firing on all cylinders, the company now expects revenue for the full year to be about $5.8 billion, or 14 to 15% higher than its $5.08 billion of revenue in fiscal 2022. In line with that, it expects non-GAAP EPS to grow 21 or 22% year over year, and it projects an even bigger improvement in non-GAAP operating margin—150 basis points total—than its previous guidance.
That’s what I’d call a great quarter: top-line growth, bottom-line growth, higher guidance for the year. And from talking with Glaser, it’s very clear that Synopsys is serious about continuing to improve margins, both short- and long-term. By the way, the company also has plenty of cash and almost no debt.
The numbers are even more impressive when you consider the volatility of the market that Synopsys operates in. Demand for semiconductors has been down, especially for the chips that go into consumer products. But Synopsys says that its customers continue to invest in R&D for new chips so they can take advantage of opportunities when demand does pick up again. In de Geus’ words, for the chipmakers, “The worst is to miss an upturn, because that’s where most of the money is made.”
Leadership in AI helps Synopsys navigate a turbulent market
Synopsys is able to deliver such strong results because it’s in the right market space with the right technology—and AI is at the heart of that. “I think there has been great sensitization in the last few months around the whole notion of AI,” de Geus told me. “And you know, this is not new to us, because we had predicted the world was going to go to ‘smart everything’ about 12 years ago and built in that direction.”
Right now, generative AI like ChatGPT is getting most of the attention in the press, but Synopsys has already spent many years using machine learning (ML), big data and different types of AI to help develop and debug chips. As covered in my earlier post, its recent announcement of the Synopsys.ai platform means that the company is now applying AI to the entire EDA stack: design, verification, test and manufacturing—even for analog chips.
How’s that approach working? In a word, great. Nine of the top 10 semiconductor companies are already using AI-driven tools from Synopsys, and the uptake for these tools is only picking up pace. It started in mid-2021, when Samsung announced it had achieved the first AI-driven commercial tape-out in the world by using Synopsys’ DSO.ai. (DSO stands for “Design Space Optimization.”) By the end of 2022, Synopsys customers had already achieved 100 AI-driven tape-outs, and when I talked to de Geus in the past two weeks, he told me that it’s now “well over 200 tape-outs.”
To pick one example out of the many design wins Synopsys achieved in the past quarter, the company is now collaborating with TSMC to deliver EDA flows for the foundry giant’s most advanced 2nm process node. In another example, Renesas achieved up to a 10X improvement in reducing functional coverage holes and up to a 30 percent increase in verification productivity.
Just think about the demands of that in terms of design complexity and zero tolerance for error.
The role of hyperscalers and generative AI
While he didn’t name specific companies, de Geus did verify that the list of his customers using AI-driven tools includes not only pure-play chipmakers, but also hyperscalers. This makes sense when you consider that about 45% of Synopsys’ business comes from what de Geus calls “systems companies,” meaning companies that intersect both hardware and software. This includes big cloud providers, car makers and others that are massive users of computing power and that have particular needs for operating at lower power consumption.
For example, think about how much onboard computing power will be needed by the end of this decade for fully autonomous cars—and yet all that computing power must still be delivered at the lowest possible level of energy consumption. As de Geus said, “If [chips] go into a car, they also better be as low-power as possible, because every 100 watts that you use in compute essentially takes away 10 to 15 miles of distance that you can drive on the battery.”
To address these needs, generative AI of the type used in ChatGPT isn’t yet ready to contribute to EDA processes. “Generative AI has fantastic results” in other contexts, de Geus told me, “but its results can have imperfections. In what we do, there’s zero room for that. No, we are a company that has to deliver 99.99999—you know the number of nines can never be enough.” This is so because even the slightest error in a chip design can have huge consequences in terms of efficiency, yield, time-to-market and so on. “So therein lies a very profound distinction between the level of precision that our AI has to have for these designs versus these more search-type AI capabilities.”
Mind you, de Geus’ enthusiasm for generative AI came through loud and clear. “I think it’s fantastic, what’s happening” with generative AI, he said, “and this will impact the world massively.” In the longer term, Synopsys president and COO Sassine Ghazi believes that although these are early days of research and development, generative AI is at an inflection point and is a game-changing technology that will offer significant opportunities for EDA applications.
Meanwhile, Synopsys engineers are exploring how cutting-edge large language models (LLMs) of the type used by ChatGPT can help streamline internal processes and augment existing solutions. According to Ghazi, “Synopsys pioneered AI-Driven chip design and this is only the beginning of our AI journey to deliver productivity breakthroughs for customers.”
If you want to know more about the AI journey Synopsys has been on for the past several years, I recommend a fascinating post the company published after its recent SNUG conference, where Synopsys.ai took center stage. At the conference, Ghazi celebrated the crucial work done by Synopsys engineers who were inspired by the 2016 and 2017 victories of Google’s AI-driven AlphaGo over masters of the ancient game of Go. Those engineers began to dig into the ways that AlphaGo’s reinforcement learning (RL) techniques might be applied to EDA. What started as merely an idea then quickly became an MVP, and is now being used to design some of the world’s most complex multi-die architectures.
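For readers who want a feel for what applying AlphaGo-style reinforcement learning to EDA can mean in practice, here is a deliberately toy sketch of the underlying explore-and-exploit idea: an agent samples candidate tool configurations, scores them, and biases future samples toward the best results seen so far. The knobs, the reward function and the epsilon-greedy strategy below are hypothetical simplifications for illustration; they are not a description of how DSO.ai actually works.

```python
import random

# Toy "design space": each knob stands in for a tool setting an EDA flow could sweep.
# Both the knobs and the reward are hypothetical; real design spaces have thousands of
# dimensions, and the reward comes from running synthesis and place-and-route.
DESIGN_SPACE = {
    "placement_density": [0.6, 0.7, 0.8],
    "clock_uncertainty_ps": [20, 40, 60],
    "max_fanout": [16, 32, 64],
}

def quality_of_results(cfg):
    """Stand-in reward (higher is better); a real flow would return timing, power and area metrics."""
    return -(abs(cfg["placement_density"] - 0.7) * 10
             + cfg["clock_uncertainty_ps"] / 60
             + cfg["max_fanout"] / 64)

def random_config():
    return {knob: random.choice(values) for knob, values in DESIGN_SPACE.items()}

def epsilon_greedy_search(episodes=200, epsilon=0.2):
    """Explore randomly with probability epsilon; otherwise perturb the best configuration so far."""
    best_cfg = random_config()
    best_reward = quality_of_results(best_cfg)
    for _ in range(episodes):
        if random.random() < epsilon:
            cfg = random_config()                  # explore the space
        else:
            cfg = dict(best_cfg)                   # exploit: tweak one knob of the best config
            knob = random.choice(list(DESIGN_SPACE))
            cfg[knob] = random.choice(DESIGN_SPACE[knob])
        reward = quality_of_results(cfg)
        if reward > best_reward:
            best_cfg, best_reward = cfg, reward
    return best_cfg, best_reward

if __name__ == "__main__":
    cfg, reward = epsilon_greedy_search()
    print("best configuration found:", cfg, "reward:", round(reward, 3))
```

Production tools of this kind would presumably swap the toy reward for real synthesis and place-and-route runs and the naive search for learned policies, but the core loop of propose, evaluate and learn is the same general shape.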
While generative AI will definitely have a role in the future of EDA—and Synopsys is bullish about taking a leadership role there as well—the company’s existing AI tools are providing big improvements in productivity and speed for chipmakers in the here-and-now.
The future of EDA is “Smart, Secure and Safe”
For de Geus and his team, the mission is broader than just using more AI to speed up and smooth out EDA. In his earnings report and again in his call with me, the CEO reiterated the need for Synopsys to help deliver technology that is smart, secure and safe.
The “smart” part gets harder all the time as companies demand more from chips: greater speed, more sophistication, higher performance-per-watt and so on. Besides the automotive example mentioned above, this certainly applies to things like AI-driven search, which requires several times as much computing power as traditional text search. Other chips used for AI functions are similarly compute-hungry.
De Geus made the point that Synopsys itself is adding to this demand. “We are using AI, very advanced AI, now ourselves to help our customers design super advanced chips,” he said. He compared it to the famous M. C. Escher picture of two hands drawing each other, adding that “I’ve always loved that representation because we are in the midst of that.”
He also believes that safety and security are “increasingly, jointly important” alongside smarts. This is so, he told me, because “the very complex system, if it’s not secure—there are enormous dangers in that.” He added that “these dangers get aggravated if they touch human life in any form.” It’s not hard to come up with contexts—cars, medical devices, navigation systems and so on—where any of us would insist on super-high safety and security.
For customers looking to bolster security and safety, de Geus said that Synopsys helps “by virtue of putting mechanism[s] in chips that help encrypt the data, provide root of trust, a unique identification, et cetera. And that’s built into the IP and the design flows.”
For de Geus, all these considerations are “interwoven and looped, and the really fantastic thing for us is we’re literally in the middle of many of these loops” of design, AI utilization and so on. In his view, that gives Synopsys lots of room to run. As he put it, “The roadmap of optimizing, automating and generative AI use cases is wide open to deliver productivity breakthroughs for years to come.”
The future of AI in EDA
As he answered a question on the earnings call, de Geus summed up his business philosophy when he said, “Whenever the world changes, there’s opportunity.” I found that telling, and a little ironic, given that technology industry analysts also grow when there’s change.
Synopsys has certainly been following that path since it launched its Smart Everything strategy a dozen years ago, and especially since it started going deep into RL and other AI techniques around 2017. What once sounded like science fiction is now penetrating deeper and deeper into every part of the creation process for semiconductors, from the early stages of design to the final steps of manufacturing.
I agree with de Geus that Synopsys’ opportunities in this space are diverse and open-ended. Better, the company has shown year after year that it can take good ideas and quickly turn them into real-world engineering that addresses customers’ challenges. The combination of strategy and execution has really paid off for them, and I expect that to continue.
Terry Savage and I are collecting Social Security horror stories and posting them at my substack site, larrykotlikoff.substack.com. Sign up for free and take a read. If you have a story, …
I copy below a story we just received. I thought I’d heard it all. But here we have the case of an 81-year-old widow being clawed back for $6,000 that she supposedly received 45 years ago! The story was written by her daughter, who also lives in terror that she will be clawed back for child benefits she received starting at age 2.
In one day, Congress can stop Social Security’s acts of financial terrorism. All it needs to do is pass a bill stipulating that system overpayments more than 18 months old are not clawed back unless the overpayments are due to fraud by the recipient.
As is almost always true, Social Security provided no explanation about this clawback. It sounds like the Social Security clerk signed up the mother to receive benefits even though she would lose them due to the earnings test. But, given the family benefit maximum, benefits not paid to the mother should have been reallocated to the children. Hence, there should be no overall family-wide clawback.
My guess is Social Security has no documents dating back 45 years to verify the “overpayment.” But if the system’s “brilliant” software is making this decision, why did it take 45 years to find its mistake? My sense is there is a 50-50 chance the clawback is incorrectly calculated. But even if it were correct, common decency says to let it go.
Social Security Claws Back an 81-Year-Old Widow for Benefits Allegedly Received in 1978!
I read an article online, several weeks ago, about how the Social Security Administration was harassing a young man for a supposed overpayment, that was sent to his mother, when he was a child..
I have a similar scenario I’ve been dealing with for about 2 years. It concerns my 80 yr old mother’s SS checks.. and your story now has me worried that this may haunt my sister and I unless we can get it resolved..
A couple yrs ago, my mother received a letter from SS.. saying that they believed she was overpaid approx $6000.. about 45 yrs ago.. She was a young widow, and my sister and I were eligible to collect benefits from our father’s SS.. starting in approx 1977.. I was about 14 when my father died of cancer.. My mother had worked part-time for several years prior to his death.. and she continued to do so.. I remember her telling me that when she spoke to the man at the social security office, he said he would divide the amount we were eligible to receive, into 3 checks each month.. One to my mother, one to my mother for me and one to my mother for my sister.. She told the man she planned to continue working, but he seemed to believe she wouldn’t.. and said that he would just set it up so her name was already in the system.. and if she quit working, it would be easier to add her benefit to the amount.. I have heard that explanation several times over the years.. as to why she got a check in her name.. So she did continue to work.. and remarried a few yrs later.. My sister and I both continued to receive checks in our own name after we turned 18 while we continued to take college classes. When we stopped going to college the checks stopped..
Many years go by; my mother continued to work off and on over the next 30 odd years.. Eventually she decided to collect her own social security at age 66, I believe.. She collected her approximately $980 social security for 12-13 years.. Now, the SS administration believes she was overpaid, 45 yrs ago and owes nearly $6000. The first letter said they would be keeping her entire check for the next 7 months unless she filled out a certain form, claiming she didn’t believe she owed the money, because it wasn’t her fault, or paying it back would cause undue hardship.. She made several calls to get an appointment at the social security office in Lake Mary Florida.. but this was during the covid pandemic, and everything had to be submitted electronically or left in the drop box.. She wrote a letter explaining that she did not believe she was overpaid.. but due to it being 45 years ago, she has no paperwork that gives details of what the benefit amount was supposed to be, or whether that amount included a widow benefit, or was just for the children.. We have never been shown any paperwork from SS that proves how they came to this conclusion.. and we have no way at this late date to argue her case..
Eventually, after filling out forms it was concluded that she had not proved that she did not owe the money.. but since her monthly check is her only source of income, SS agreed in a follow up letter, to take $51 a month, for several years until the $6000 was paid back.. Her checks continued to arrive, minus the $51 .. Until the beginning of the following year, when she again received a letter stating the exact same thing; that she owed thousands of dollars, and the SS administration was going to keep her entire check for several months until they were paid back!.. At this point the Lake Mary FL branch office had reopened.. and we went in and talked to someone in person, thinking surely this would be an easy fix.. but again, forms were filled out, nobody could explain why they weren’t honoring the letter they sent the previous year stating that $51 a month would be collected for the next several years.. After several trips to speak to people in person, we finally got a supervisor that reinstated the original agreement.. but my 81 yr old mother swears to this day she doesn’t owe it.. but has no way to fight it and has not seen any evidence that she actually does.. She is constantly terrified that her check will one day just stop showing up.. and she needs that money desperately.. as I said, it is her only source of income.. and my step father is in the same boat, with his health quickly deteriorating..
The latest episode of the Made By Google podcast revealed an interesting piece of information about a new Pixel device in the works.
Vice president and head of design for hardware, Ivy Ross, stated just under halfway through the discussion that the company had another foldable handset ready to go, but the team decided against releasing it because it wasn’t up to scratch.
“I’m really proud of the team because there was another foldable model that we had created, that we had the discipline to hold back and say nope – that’s not good enough yet – and really wait until we felt like we could do something that was good enough, or better, than what was out there already. It’s a real testimony to the fact that we’re able to do that and recognise that something isn’t good enough,” Ross explained.
Unfortunately the discussion ended there and the host moved on to another topic. This is an in-house podcast so there was no annoying reporter, like myself, to press for more details on the mystery device. Unless Google is cooking up an entirely new device category, like Motorola’s recently debuted rollable concept phone, it sounds like the tech company is building a clamshell-style flip phone. The “better than what was out there” line is telling because it suggests the new device has some competition already.
Samsung’s Galaxy Z Flip range is on its fourth iteration and we’re about to see a fourth version of Motorola’s Razr on June 1st. There are no details or rumours about specifications yet, but Google does have pedigree in getting the most out of its hardware with software trickery. This is particularly important for flip phones that have limited internal space because of a massive hinge in the middle of the device.
[Image: Samsung’s Flip 4 is the market leader right now. Credit: Janhoi McGregor]
A Tensor chip powering the mystery phone is likely, but what Google does with the camera setup is the real question. The Pixel Fold looks like it has a similar setup to the Pixel 7 range. We know that even the budget Pixel 7a can produce images on par with the 7 Pro with inferior hardware, so I’m confident Google can conjure up similar wizardry with the unreleased device.
The conversation in the podcast leading up to the new gadget revelation is about Google’s design principles and the work that went into making the Fold the thinnest on the market by engineering a strong, yet compact hinge. So the company may still be working on that kind of hinge engineering, and on finding internal space, for the mystery device.
There is no guarantee that this is a flip phone, of course. Nor that it will ever be released. Google had planned to release a Pixel Watch back in 2016 alongside the original Pixel phone, but the wearable was pulled at the last minute because it wasn’t up to scratch. The official Pixel Watch was released in 2022 – six years later. We will have to see if this device ever makes it to market, but considering Google is already releasing a foldable phone soon, I doubt we’ll be waiting another six years.
House Speaker Kevin McCarthy (R-CA) and President Joe Biden reached an agreement over the weekend to limit spending for a small fraction of the federal government for a couple of years. But it would not come close to materially reducing annual deficits or slowing the growth of the national debt, which the Congressional Budget Office predicts is on pace to exceed $46 trillion by 2032.
The bill would cap discretionary spending (excluding entitlement programs such as Social Security and Medicare) through 2029. It would reduce that spending authority in fiscal year 2024, then increase it by 1% annually from 2025 to 2029. CBO estimates the plan would lower projected spending by about $64 billion in fiscal year 2024, or roughly 3.5%, and by about $107 billion in 2025.
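Those two CBO figures imply a baseline worth spelling out. The quick arithmetic below is my own back-of-the-envelope check, not a CBO calculation, and it uses only the numbers quoted above.

```python
# Back-of-the-envelope check on the CBO figures quoted above: if a ~$64 billion
# reduction is roughly 3.5% of projected FY2024 discretionary spending, the
# implied baseline works out to roughly $1.8 trillion.
cut_2024_billions = 64
cut_share = 0.035
implied_baseline_billions = cut_2024_billions / cut_share
print(f"Implied FY2024 discretionary baseline: ~${implied_baseline_billions:,.0f} billion")
```

In other words, the caps operate on a discretionary pot of roughly $1.8 trillion a year.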
The Loopholes
But there are two big caveats:
First, the agreement effectively is unenforceable after fiscal 2025. Any spending caps would have to be accepted by future congresses. And if history is any guide, they won’t be.
Second, many of the caps would be offset by a series of informal side deals, including some budget accounting gimmicks. Those agreements could mean non-entitlement domestic spending in 2024 would be only about 0.2% less than this year.
Why were these changes not included in the bill? Mostly so McCarthy could claim deeper spending reductions than he actually got. Since none of those informal agreements are in the actual legislation, they don’t show up in the official CBO score.
Inflating the savings is critical for McCarthy since the GOP’s fiscal demands have shrunk so dramatically in just five months. They started the year by vowing to balance the budget in a decade, which would have required lowering cumulative deficits by roughly $20 trillion over the period. By the time House Republicans passed their Limit, Save, Grow Act in April, they were aiming to cut deficits by about one-quarter of that, a total of about $4.8 trillion.
Now, McCarthy has agreed to cut planned spending by about $170 billion over the next two years, at least on paper. For reference, the federal government currently is on track to spend a total of $11.6 trillion in 2024 and 2025, excluding interest.
The result: A fiscal path that barely diverges from the one the nation already is on. It isn’t nothing. But it is far from the remaking of government that many in the GOP demanded.
A Preordained Outcome
Here is what happens to total non-interest federal spending, according to CBO. But remember, it overstates actual savings.
[Chart: Federal spending under the Fiscal Responsibility Act of 2023. Source: Tax Policy Center]
This outcome has been ordained for weeks. In April, I predicted Congress would find a way to raise the debt limit without reducing the federal debt in any meaningful way.
Once Republicans agreed to take Social Security, Medicare, military spending, and veterans benefits off the table and refused to consider tax increases of any kind, they had no real way to achieve significant deficit reduction.
House Republicans left themselves with no choice but to focus all their cuts on only about one-seventh of non-interest federal spending. And many of those programs have powerful constituencies, even in the GOP caucus.
Rope-A-Dope
For their part, Biden and the Democrats made little attempt to push their own priorities in talks with McCarthy. Instead, Biden’s rope-a-dope tactics simply parried McCarthy’s best punches.
The GOP spends millions of dollars in negative ads and focuses much of its free media on attacking Joe Biden for his alleged age-related infirmities. But for a supposedly confused old guy, the president still plays the policy game awfully well.
Biden did concede on a few issues. While he rejected a GOP push to toughen work requirements for Medicaid, he went along with some modest additional obligations for Supplemental Nutrition Assistance Program (food stamp) recipients. But he also got a liberalization of other SNAP eligibility rules.
He agreed to shift about $28 billion in pandemic-related funding to other domestic spending. But chances are much of that Covid-19 money never would have been spent anyway.
The bill itself scales back the Inflation Reduction Act’s $80 billion bump in IRS spending by just $1.4 billion. But negotiators informally agreed to additional cuts of about $20 billion from the roughly $70 billion the IRS has not yet spent. The measure would “repurpose” about $10 billion in fiscal 2024, whatever that means, and shift $10 billion more to other domestic programs in fiscal 2025.
But keep in mind that the IRS has significant flexibility in when it spends the rest. So, at least in theory, it might be able to front-load some of its remaining $50 billion to the next couple of years and count on a future Democratic Congress and White House to restore the rest.
For all the bluster from the anti-government wing of the GOP and all the drama around the potential of a U.S. government default, the Biden-McCarthy agreement changes almost nothing. Except, of course, it puts off the next debt limit drama for another two years.
Many B2B organizations excel at customer experience (CX), but most don’t apply the same discipline to the buyer experience – the pre-purchase part of the customer journey. It’s an often-overlooked lever for accelerating growth. With B2B buyers more savvy and demanding than ever, taking a CX approach to understanding and managing the buyer experience can be a competitive advantage.
But what exactly do we mean by “buyer experience”? Forrester defines buyer experience as a buyer’s perceptions of the full set of actions, interactions, and engagements that buyers work through, from defining a problem and exploring solutions through to the selection of a vendor and the finalization and initial deployment of a purchase. Basically, this entails thinking beyond the conception and execution of a buyer’s journey map to doing the work to understand how buyers themselves feel about the process of purchasing from you. Here are three things to consider when thinking through how to manage the buyer experience at your organization.
1) B2B Buyers Are Not Satisfied
Understanding the buyer experience begins with understanding the buyer. This includes grasping both what satisfies the buyer and what doesn’t. After all, according to Forrester’s Buyers’ Journey Survey, 2022, most global purchase influencers reported at least one area of dissatisfaction with the winning vendor at the conclusion of a purchase. Buyer dissatisfaction doesn’t just mean the factors that dissuade them from moving forward with the purchase; it extends to winning vendors that don’t consistently help buyers realize value (what they get exceeds what they give up in pursuit of their goal) during the purchasing process. And that’s where customer value comes in.
2) Customer Value Is The Key To Customer Obsession
One of the most important aspects of B2B buying behavior is how buyers perceive value from vendors. They are looking for relationships, connections, and benefits that go well beyond the product or service being offered. And that starts with the buying process. Understanding customer value is vital to managing a customer-obsessed buyer experience in which a buyer perceives value in their journey through the buying process itself. For example, how easily were they able to find the information they needed? Was the process smooth and efficient, or did they have to make multiple requests to move forward? And how well did they feel understood and respected? B2B buyers that perceive value are more likely to be satisfied during each phase of the buyer’s journey and become better, more profitable long-term customers.
3) Portfolio Marketing Teams Should Take The Lead On Buyer Experience
Because of their focus on understanding buyer personas and buyer journey mapping, portfolio marketing teams are uniquely positioned and well equipped to orchestrate improvements to the buyer experience. With these skills already in their toolbox, portfolio marketers should expand persona profiles and journey maps to include how buyers perceive value during the buying process. Many functions across the organization contribute to the buyer experience – campaign, digital and web teams, demand generation, content strategy, sales, and more. Portfolio marketers need to coordinate an aligned effort to orchestrate improvements across the various teams that deliver the touchpoints that make up the buyer’s holistic experience.
To learn more, register to attend B2B Summit North America here.
This post was written by VP, Principal Analyst Barbara Winters and it originally appeared here.